mirror of https://github.com/kovidgoyal/calibre.git
synced 2025-07-09 03:04:10 -04:00

Sync to trunk.

commit d7b6237c93

Changelog.yaml (115 lines changed)
@@ -5,7 +5,7 @@
 # Also, each release can have new and improved recipes.
 #
 # - version: ?.?.?
-#   date: 2012-??-??
+#   date: 2013-??-??
 #
 #   new features:
 #   - title:
@@ -19,6 +19,119 @@
 #   new recipes:
 #   - title:

+- version: 0.9.14
+  date: 2013-01-11
+
+  new features:
+    - title: "When adding multiple books and duplicates are found, allow the user to select which of the duplicate books will be added anyway."
+      tickets: [1095256]
+
+    - title: "Device drivers for Kobo Arc on linux, Polaroid Android tablet"
+      tickets: [1098049]
+
+    - title: "When sorting by series, use the language of the book to decide what leading articles to remove, just as is done for sorting by title"
+
+  bug fixes:
+    - title: "PDF Output: Do not error out when the input document contains links with anchors not present in the document."
+      tickets: [1096428]
+
+    - title: "Add support for upgraded db on newest Kobo firmware"
+      tickets: [1095617]
+
+    - title: "PDF Output: Fix typo that broke use of custom paper sizes."
+      tickets: [1097563]
+
+    - title: "PDF Output: Handle empty anchors present at the end of a page"
+
+    - title: "PDF Output: Fix side margins of last page in a flow being incorrect when large side margins are used."
+      tickets: [1096290]
+
+    - title: "Edit metadata dialog: Allow setting the series number for custom series type columns to zero"
+
+    - title: "When bulk editing custom series-type columns and not providing a series number, use 1 as the default instead of None"
+
+    - title: "Catalogs: Fix issue with catalog generation using Hungarian UI and author_sort beginning with multiple letter groups."
+      tickets: [1091581]
+
+    - title: "PDF Output: Don't error out on files that have invalid font-family declarations."
+      tickets: [1096279]
+
+    - title: "Do not load QRawFont at global level, to allow calibre installation on systems with missing dependencies"
+      tickets: [1096170]
+
+    - title: "PDF Output: Fix cover not present in generated PDF files"
+      tickets: [1096098]
+
+  improved recipes:
+    - Sueddeutsche Zeitung mobil
+    - Boerse Online
+    - TidBits
+    - New York Review of Books
+    - Fleshbot
+    - Il Messaggero
+    - Libero
+
+  new recipes:
+    - title: Spectator Magazine, Oxford Mail and Outside Magazine
+      author: Krittika Goyal
+
+    - title: Libartes
+      author: Darko Miletic
+
+    - title: El Diplo
+      author: Tomas De Domenico
+
+- version: 0.9.13
+  date: 2013-01-04
+
+  new features:
+    - title: "Complete rewrite of the PDF Output engine, to support links and fix various bugs"
+      type: major
+      description: "calibre now has a new PDF output engine that supports links in the text. It also fixes various bugs, detailed below. In order to implement support for links and fix these bugs, the engine had to be completely rewritten, so there may be some regressions."
+
+    - title: "Show disabled device plugins in Preferences->Ignored Devices"
+
+    - title: "Get Books: Fix Smashwords, Google books and B&N stores. Add Nook UK store"
+
+    - title: "Allow series numbers lower than -100 for custom series columns."
+      tickets: [1094475]
+
+    - title: "Add mass storage driver for Rockchip-based Android smart phones"
+      tickets: [1087809]
+
+    - title: "Add a clear ratings button to the edit metadata dialog"
+
+  bug fixes:
+    - title: "PDF Output: Fix custom page sizes not working on OS X"
+
+    - title: "PDF Output: Fix embedding of many fonts not supported (note that embedding of OpenType fonts with Postscript outlines is still not supported on Windows, though it is supported on other operating systems)"
+
+    - title: "PDF Output: Fix crashes converting some books to PDF on OS X"
+      tickets: [1087688]
+
+    - title: "HTML Input: Handle entities inside href attributes when following the links in an HTML file."
+      tickets: [1094203]
+
+    - title: "Content server: Fix custom icons not used for sub categories"
+      tickets: [1095016]
+
+    - title: "Force use of non-unicode constants in compiled templates. Fixes a problem with regular expression character classes and probably other things."
+
+    - title: "Kobo driver: Do not error out if there are invalid dates in the device database"
+      tickets: [1094597]
+
+    - title: "Content server: Fix for non-unicode hostnames when using mDNS"
+      tickets: [1094063]
+
+  improved recipes:
+    - Today's Zaman
+    - The Economist
+    - Foreign Affairs
+    - New York Times
+    - Alternet
+    - Harper's Magazine
+    - La Stampa
+
 - version: 0.9.12
   date: 2012-12-28
README (10 lines changed)

@@ -1,7 +1,7 @@
-calibre is an e-book library manager. It can view, convert and catalog e-books \
-in most of the major e-book formats. It can also talk to e-book reader \
-devices. It can go out to the internet and fetch metadata for your books. \
-It can download newspapers and convert them into e-books for convenient \
+calibre is an e-book library manager. It can view, convert and catalog e-books
+in most of the major e-book formats. It can also talk to e-book reader
+devices. It can go out to the internet and fetch metadata for your books.
+It can download newspapers and convert them into e-books for convenient
 reading. It is cross platform, running on Linux, Windows and OS X.

 For screenshots: https://calibre-ebook.com/demo
@@ -15,5 +15,5 @@ bzr branch lp:calibre
 To update your copy of the source code:
 bzr merge

-Tarballs of the source code for each release are now available \
+Tarballs of the source code for each release are now available
 at http://code.google.com/p/calibre-ebook
@@ -164,7 +164,6 @@ Follow these steps to find the problem:
 * Ensure your operating system is seeing the device. That is, the device should show up in Windows Explorer (in Windows) or Finder (in OS X).
 * In |app|, go to Preferences->Ignored Devices and check that your device
   is not being ignored
-* In |app|, go to Preferences->Plugins->Device Interface plugin and make sure the plugin for your device is enabled, the plugin icon next to it should be green when it is enabled.
 * If all the above steps fail, go to Preferences->Miscellaneous and click debug device detection with your device attached and post the output as a ticket on `the calibre bug tracker <http://bugs.calibre-ebook.com>`_.

 My device is non-standard or unusual. What can I do to connect to it?
@@ -438,10 +437,10 @@ that allows you to create collections on your Kindle from the |app| metadata. It

 .. note:: Amazon have removed the ability to manipulate collections completely in their newer models, like the Kindle Touch and Kindle Fire, making even the above plugin useless. If you really want the ability to manage collections on your Kindle via a USB connection, we encourage you to complain to Amazon about it, or get a reader where this is supported, like the SONY or Kobo Readers.

-I am getting an error when I try to use |app| with my Kobo Touch?
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+I am getting an error when I try to use |app| with my Kobo Touch/Glo/etc.?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-The Kobo Touch has very buggy firmware. Connecting to it has been known to fail at random. Certain combinations of motherboard, USB ports/cables/hubs can exacerbate this tendency to fail. If you are getting an error when connecting to your touch with |app| try the following, each of which has solved the problem for *some* |app| users.
+The Kobo has very buggy firmware. Connecting to it has been known to fail at random. Certain combinations of motherboard, USB ports/cables/hubs can exacerbate this tendency to fail. If you are getting an error when connecting to your touch with |app| try the following, each of which has solved the problem for *some* |app| users.

 * Connect the Kobo directly to your computer, not via USB Hub
 * Try a different USB cable and a different USB port on your computer
@@ -673,6 +672,19 @@ There are three possible things I know of, that can cause this:
 * The Logitech SetPoint Settings application causes random crashes in
   |app| when it is open. Close it before starting |app|.

+If none of the above apply to you, then there is some other program on your
+computer that is interfering with |app|. First reboot your computer in safe
+mode, to have as few running programs as possible, and see if the crashes still
+happen. If they do not, then you know it is some program causing the problem.
+The most likely such culprit is a program that modifies other programs'
+behavior, such as an antivirus, a device driver, something like RoboForm (an
+automatic form filling app) or an assistive technology like Voice Control or a
+Screen Reader.
+
+The only way to find the culprit is to eliminate the programs one by one and
+see which one is causing the issue. Basically, stop a program, run calibre,
+check for crashes. If they still happen, stop another program and repeat.
+
 |app| is not starting on OS X?
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -10,14 +10,12 @@ class Alternet(BasicNewsRecipe):
     category = 'News, Magazine'
     description = 'News magazine and online community'
     feeds = [
-        (u'Front Page', u'http://feeds.feedblitz.com/alternet'),
-        (u'Breaking News', u'http://feeds.feedblitz.com/alternet_breaking_news'),
-        (u'Top Ten Campaigns', u'http://feeds.feedblitz.com/alternet_top_10_campaigns'),
-        (u'Special Coverage Areas', u'http://feeds.feedblitz.com/alternet_coverage')
+        (u'Front Page', u'http://feeds.feedblitz.com/alternet')
     ]

     remove_attributes = ['width', 'align','cellspacing']
     remove_javascript = True
-    use_embedded_content = False
+    use_embedded_content = True
     no_stylesheets = True
     language = 'en'
     encoding = 'UTF-8'
@@ -1,33 +1,36 @@
 from calibre.web.feeds.recipes import BasicNewsRecipe

 class AdvancedUserRecipe1303841067(BasicNewsRecipe):

     title = u'Börse-online'
-    __author__ = 'schuster'
+    __author__ = 'schuster, Armin Geller'
     oldest_article = 1
     max_articles_per_feed = 100
     no_stylesheets = True
     use_embedded_content = False
     language = 'de'
     remove_javascript = True
-    cover_url = 'http://www.dpv.de/images/1995/source.gif'
-    masthead_url = 'http://www.zeitschriften-cover.de/cover/boerse-online-cover-januar-2010-x1387.jpg'
-    extra_css = '''
-        h1{font-family:Arial,Helvetica,sans-serif; font-weight:bold;font-size:large;}
-        h4{font-family:Arial,Helvetica,sans-serif; font-weight:normal;font-size:small;}
-        img {min-width:300px; max-width:600px; min-height:300px; max-height:800px}
-        p{font-family:Arial,Helvetica,sans-serif;font-size:small;}
-        body{font-family:Helvetica,Arial,sans-serif;font-size:small;}
-    '''
-    remove_tags_bevor = [dict(name='h3')]
-    remove_tags_after = [dict(name='div', attrs={'class':'artikelfuss'})]
-    remove_tags = [dict(attrs={'class':['moduleTopNav', 'moduleHeaderNav', 'text', 'blau', 'poll1150']}),
-        dict(id=['newsletterlayer', 'newsletterlayerClose', 'newsletterlayer_body', 'newsletterarray_error', 'newsletterlayer_emailadress', 'newsletterlayer_submit', 'kommentar']),
-        dict(name=['h2', 'Gesamtranking', 'h3',''])]
-
-    feeds = [(u'Börsennachrichten', u'http://www.boerse-online.de/rss/')]
+    encoding = 'iso-8859-1'
+    timefmt = ' [%a, %d %b %Y]'
+
+    cover_url = 'http://www.wirtschaftsmedien-shop.de/s/media/coverimages/7576_2013107.jpg'
+    masthead_url = 'http://upload.wikimedia.org/wikipedia/de/5/56/B%C3%B6rse_Online_Logo.svg'
+
+    remove_tags_after = [dict(name='div', attrs={'class':['artikelfuss', 'rahmen600']})]
+
+    remove_tags = [
+        dict(name='div', attrs={'id':['breadcrumb', 'rightCol', 'clearall']}),
+        dict(name='div', attrs={'class':['footer', 'artikelfuss']}),
+    ]
+
+    keep_only_tags = [
+        dict(name='div', attrs={'id':['contentWrapper']})
+    ]
+
+    feeds = [(u'Börsennachrichten', u'http://www.boerse-online.de/rss/')]

     def print_version(self, url):
         return url.replace('.html#nv=rss', '.html?mode=print')
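Both the old and new versions of this recipe keep the same `print_version` hook: calibre calls it with each article URL and downloads the returned URL instead. A standalone sketch of the rewrite it performs (plain Python, no calibre imports; the example URL is illustrative only):

```python
def print_version(url):
    # Drop the RSS tracking fragment and request the print-friendly view,
    # mirroring the recipe's url.replace('.html#nv=rss', '.html?mode=print').
    return url.replace('.html#nv=rss', '.html?mode=print')

print(print_version('http://www.boerse-online.de/nachricht/beispiel.html#nv=rss'))
# → http://www.boerse-online.de/nachricht/beispiel.html?mode=print
```

URLs without the fragment pass through unchanged, so the hook is safe to apply to every article link.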
@@ -11,16 +11,15 @@ class BusinessWeekMagazine(BasicNewsRecipe):
     category = 'news'
     encoding = 'UTF-8'
     keep_only_tags = [
         dict(name='div', attrs={'id':'article_body_container'}),
     ]
-    remove_tags = [dict(name='ui'),dict(name='li')]
+    remove_tags = [dict(name='ui'),dict(name='li'),dict(name='div', attrs={'id':['share-email']})]
     no_javascript = True
     no_stylesheets = True

     cover_url = 'http://images.businessweek.com/mz/covers/current_120x160.jpg'

     def parse_index(self):

         #Go to the issue
         soup = self.index_to_soup('http://www.businessweek.com/magazine/news/articles/business_news.htm')

@@ -47,7 +46,6 @@ class BusinessWeekMagazine(BasicNewsRecipe):
             if section_title not in feeds:
                 feeds[section_title] = []
             feeds[section_title] += articles

         div1 = soup.find ('div', attrs={'class':'column center'})
         section_title = ''
         for div in div1.findAll('h5'):
@@ -12,10 +12,10 @@ class Chronicle(BasicNewsRecipe):
     category = 'news'
     encoding = 'UTF-8'
     keep_only_tags = [
-        dict(name='div', attrs={'class':'article'}),
+        dict(name='div', attrs={'class':['article','blog-mod']}),
     ]
-    remove_tags = [dict(name='div',attrs={'class':['related module1','maintitle']}),
-        dict(name='div', attrs={'id':['section-nav','icon-row', 'enlarge-popup']}),
+    remove_tags = [dict(name='div',attrs={'class':['related module1','maintitle','entry-utility','object-meta']}),
+        dict(name='div', attrs={'id':['section-nav','icon-row', 'enlarge-popup','confirm-popup']}),
         dict(name='a', attrs={'class':'show-enlarge enlarge'})]
     no_javascript = True
     no_stylesheets = True
@@ -70,18 +70,6 @@ class Economist(BasicNewsRecipe):
         return br
     '''

-    def get_cover_url(self):
-        soup = self.index_to_soup('http://www.economist.com/printedition/covers')
-        div = soup.find('div', attrs={'class':lambda x: x and
-            'print-cover-links' in x})
-        a = div.find('a', href=True)
-        url = a.get('href')
-        if url.startswith('/'):
-            url = 'http://www.economist.com' + url
-        soup = self.index_to_soup(url)
-        div = soup.find('div', attrs={'class':'cover-content'})
-        img = div.find('img', src=True)
-        return img.get('src')
-
     def parse_index(self):
         return self.economist_parse_index()
@@ -92,7 +80,7 @@ class Economist(BasicNewsRecipe):
         if div is not None:
             img = div.find('img', src=True)
             if img is not None:
-                self.cover_url = img['src']
+                self.cover_url = re.sub('thumbnail','full',img['src'])
         feeds = OrderedDict()
         for section in soup.findAll(attrs={'class':lambda x: x and 'section' in
             x}):
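The one-line change above upgrades the cover from the thumbnail variant to the full-size one by rewriting the image URL in place. A self-contained sketch of the substitution (the URL is a made-up example in the same imagecache style, not necessarily a real one):

```python
import re

# Illustrative thumbnail URL in the style of the Economist's imagecache paths.
src = 'http://media.economist.com/sites/default/files/imagecache/thumbnail/print-covers/cover.jpg'

# Replace the first occurrence of 'thumbnail' with 'full', as the recipe does.
cover_url = re.sub('thumbnail', 'full', src)
print(cover_url)
# → http://media.economist.com/sites/default/files/imagecache/full/print-covers/cover.jpg
```

Since `re.sub` with a plain-string pattern behaves like a literal replace here, the change only matters if the path segment is present; URLs without it are left untouched.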
@@ -9,7 +9,7 @@ from calibre.web.feeds.news import BasicNewsRecipe
 from calibre.ebooks.BeautifulSoup import Tag, NavigableString
 from collections import OrderedDict

-import time, re
+import re

 class Economist(BasicNewsRecipe):
@@ -37,7 +37,6 @@ class Economist(BasicNewsRecipe):
         padding: 7px 0px 9px;
     }
     '''

     oldest_article = 7.0
     remove_tags = [
         dict(name=['script', 'noscript', 'title', 'iframe', 'cf_floatingcontent']),
@@ -46,7 +45,6 @@ class Economist(BasicNewsRecipe):
         {'class': lambda x: x and 'share-links-header' in x},
     ]
     keep_only_tags = [dict(id='ec-article-body')]
-    needs_subscription = False
     no_stylesheets = True
     preprocess_regexps = [(re.compile('</html>.*', re.DOTALL),
         lambda x:'</html>')]
@@ -55,28 +53,26 @@ class Economist(BasicNewsRecipe):
     # downloaded with connection reset by peer (104) errors.
     delay = 1

-    def get_cover_url(self):
-        soup = self.index_to_soup('http://www.economist.com/printedition/covers')
-        div = soup.find('div', attrs={'class':lambda x: x and
-            'print-cover-links' in x})
-        a = div.find('a', href=True)
-        url = a.get('href')
-        if url.startswith('/'):
-            url = 'http://www.economist.com' + url
-        soup = self.index_to_soup(url)
-        div = soup.find('div', attrs={'class':'cover-content'})
-        img = div.find('img', src=True)
-        return img.get('src')
+    needs_subscription = False
+    '''
+    def get_browser(self):
+        br = BasicNewsRecipe.get_browser()
+        if self.username and self.password:
+            br.open('http://www.economist.com/user/login')
+            br.select_form(nr=1)
+            br['name'] = self.username
+            br['pass'] = self.password
+            res = br.submit()
+            raw = res.read()
+            if '>Log out<' not in raw:
+                raise ValueError('Failed to login to economist.com. '
+                        'Check your username and password.')
+        return br
+    '''

     def parse_index(self):
-        try:
-            return self.economist_parse_index()
-        except:
-            raise
-            self.log.warn(
-                'Initial attempt to parse index failed, retrying in 30 seconds')
-            time.sleep(30)
-            return self.economist_parse_index()
+        return self.economist_parse_index()

     def economist_parse_index(self):
         soup = self.index_to_soup(self.INDEX)
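The deleted `try`/`except` block was effectively dead code: the bare `raise` re-threw the exception immediately, so the retry logic after it could never run, and collapsing the method body to a direct call changes no behavior. A small demonstration of why statements after a bare `raise` are unreachable (the function names are illustrative, not calibre API):

```python
attempted_retry = False

def parse_index_old_style():
    # Mirrors the shape of the removed code: the bare `raise` in the
    # except block re-throws at once, so nothing after it ever executes.
    global attempted_retry
    try:
        raise ValueError('index download failed')
    except ValueError:
        raise
        attempted_retry = True  # unreachable, like the removed retry code

try:
    parse_index_old_style()
except ValueError:
    pass

print(attempted_retry)  # → False: the "retry" branch never ran
```

This also explains why removing `import time` in the same commit is safe: the only `time.sleep` call lived in the unreachable branch.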
@@ -84,7 +80,7 @@ class Economist(BasicNewsRecipe):
         if div is not None:
             img = div.find('img', src=True)
             if img is not None:
-                self.cover_url = img['src']
+                self.cover_url = re.sub('thumbnail','full',img['src'])
         feeds = OrderedDict()
         for section in soup.findAll(attrs={'class':lambda x: x and 'section' in
             x}):
@@ -151,154 +147,3 @@ class Economist(BasicNewsRecipe):
             div.insert(2, img)
             table.replaceWith(div)
         return soup
-
-'''
-from calibre.web.feeds.news import BasicNewsRecipe
-from calibre.utils.threadpool import ThreadPool, makeRequests
-from calibre.ebooks.BeautifulSoup import Tag, NavigableString
-import time, string, re
-from datetime import datetime
-from lxml import html
-
-class Economist(BasicNewsRecipe):
-
-    title = 'The Economist (RSS)'
-    language = 'en'
-
-    __author__ = "Kovid Goyal"
-    description = ('Global news and current affairs from a European'
-            ' perspective. Best downloaded on Friday mornings (GMT).'
-            ' Much slower than the print edition based version.')
-    extra_css = '.headline {font-size: x-large;} \n h2 { font-size: small; } \n h1 { font-size: medium; }'
-    oldest_article = 7.0
-    cover_url = 'http://media.economist.com/sites/default/files/imagecache/print-cover-thumbnail/print-covers/currentcoverus_large.jpg'
-    #cover_url = 'http://www.economist.com/images/covers/currentcoverus_large.jpg'
-    remove_tags = [
-        dict(name=['script', 'noscript', 'title', 'iframe', 'cf_floatingcontent']),
-        dict(attrs={'class':['dblClkTrk', 'ec-article-info',
-            'share_inline_header', 'related-items']}),
-        {'class': lambda x: x and 'share-links-header' in x},
-    ]
-    keep_only_tags = [dict(id='ec-article-body')]
-    no_stylesheets = True
-    preprocess_regexps = [(re.compile('</html>.*', re.DOTALL),
-        lambda x:'</html>')]
-
-    def parse_index(self):
-        from calibre.web.feeds.feedparser import parse
-        if self.test:
-            self.oldest_article = 14.0
-        raw = self.index_to_soup(
-                'http://feeds.feedburner.com/economist/full_print_edition',
-                raw=True)
-        entries = parse(raw).entries
-        pool = ThreadPool(10)
-        self.feed_dict = {}
-        requests = []
-        for i, item in enumerate(entries):
-            title = item.get('title', _('Untitled article'))
-            published = item.date_parsed
-            if not published:
-                published = time.gmtime()
-            utctime = datetime(*published[:6])
-            delta = datetime.utcnow() - utctime
-            if delta.days*24*3600 + delta.seconds > 24*3600*self.oldest_article:
-                self.log.debug('Skipping article %s as it is too old.'%title)
-                continue
-            link = item.get('link', None)
-            description = item.get('description', '')
-            author = item.get('author', '')
-
-            requests.append([i, link, title, description, author, published])
-        if self.test:
-            requests = requests[:4]
-        requests = makeRequests(self.process_eco_feed_article, requests, self.eco_article_found,
-                self.eco_article_failed)
-        for r in requests: pool.putRequest(r)
-        pool.wait()
-
-        return self.eco_sort_sections([(t, a) for t, a in
-            self.feed_dict.items()])
-
-    def eco_sort_sections(self, feeds):
-        if not feeds:
-            raise ValueError('No new articles found')
-        order = {
-            'The World This Week': 1,
-            'Leaders': 2,
-            'Letters': 3,
-            'Briefing': 4,
-            'Business': 5,
-            'Finance And Economics': 6,
-            'Science & Technology': 7,
-            'Books & Arts': 8,
-            'International': 9,
-            'United States': 10,
-            'Asia': 11,
-            'Europe': 12,
-            'The Americas': 13,
-            'Middle East & Africa': 14,
-            'Britain': 15,
-            'Obituary': 16,
-        }
-        return sorted(feeds, cmp=lambda x,y:cmp(order.get(x[0], 100),
-            order.get(y[0], 100)))
-
-    def process_eco_feed_article(self, args):
-        from calibre import browser
-        i, url, title, description, author, published = args
-        br = browser()
-        ret = br.open(url)
-        raw = ret.read()
-        url = br.geturl().split('?')[0]+'/print'
-        root = html.fromstring(raw)
-        matches = root.xpath('//*[@class = "ec-article-info"]')
-        feedtitle = 'Miscellaneous'
-        if matches:
-            feedtitle = string.capwords(html.tostring(matches[-1], method='text',
-                encoding=unicode).split('|')[-1].strip())
-        return (i, feedtitle, url, title, description, author, published)
-
-    def eco_article_found(self, req, result):
-        from calibre.web.feeds import Article
-        i, feedtitle, link, title, description, author, published = result
-        self.log('Found print version for article:', title, 'in', feedtitle,
-                'at', link)
-
-        a = Article(i, title, link, author, description, published, '')
-
-        article = dict(title=a.title, description=a.text_summary,
-            date=time.strftime(self.timefmt, a.date), author=a.author, url=a.url)
-        if feedtitle not in self.feed_dict:
-            self.feed_dict[feedtitle] = []
-        self.feed_dict[feedtitle].append(article)
-
-    def eco_article_failed(self, req, tb):
-        self.log.error('Failed to download %s with error:'%req.args[0][2])
-        self.log.debug(tb)
-
-    def eco_find_image_tables(self, soup):
-        for x in soup.findAll('table', align=['right', 'center']):
-            if len(x.findAll('font')) in (1,2) and len(x.findAll('img')) == 1:
-                yield x
-
-    def postprocess_html(self, soup, first):
-        body = soup.find('body')
-        for name, val in body.attrs:
-            del body[name]
-        for table in list(self.eco_find_image_tables(soup)):
-            caption = table.find('font')
-            img = table.find('img')
-            div = Tag(soup, 'div')
-            div['style'] = 'text-align:left;font-size:70%'
-            ns = NavigableString(self.tag_to_string(caption))
-            div.insert(0, ns)
-            div.insert(1, Tag(soup, 'br'))
-            img.extract()
-            del img['width']
-            del img['height']
-            div.insert(2, img)
-            table.replaceWith(div)
-        return soup
-'''
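The deleted RSS-based recipe fanned article downloads out over calibre's bundled `threadpool` module (`makeRequests` plus the `eco_article_found` callback) and merged results into a per-section dict. The same pattern can be sketched with the standard library's `concurrent.futures`; `fetch_article` here is a hypothetical stand-in for the per-article download, not a calibre API:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def fetch_article(i, url):
    # Stand-in for process_eco_feed_article: download one article page
    # and work out which section it belongs to.
    return (i, 'Miscellaneous', url)

urls = ['http://example.com/a', 'http://example.com/b']
feed_dict = {}

# Fan the downloads out over up to 10 worker threads and collect each
# result as it completes, the role played by the eco_article_found callback.
with ThreadPoolExecutor(max_workers=10) as pool:
    futures = [pool.submit(fetch_article, i, u) for i, u in enumerate(urls)]
    for fut in as_completed(futures):
        i, section, link = fut.result()
        feed_dict.setdefault(section, []).append(link)

print(sorted(feed_dict['Miscellaneous']))
```

Unlike the removed code's separate success and failure callbacks, `fut.result()` re-raises any worker exception at the collection point, which keeps error handling in one place.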
118
recipes/el_diplo.recipe
Normal file
118
recipes/el_diplo.recipe
Normal file
@ -0,0 +1,118 @@
|
|||||||
# Copyright 2013 Tomás Di Domenico
#
# This is a news fetching recipe for the Calibre ebook software, for
# fetching the Cono Sur edition of Le Monde Diplomatique (www.eldiplo.org).
#
# This recipe is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This software is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this recipe. If not, see <http://www.gnu.org/licenses/>.

import re
from contextlib import closing
from calibre.web.feeds.recipes import BasicNewsRecipe
from calibre.ptempfile import PersistentTemporaryFile
from calibre.utils.magick import Image

class ElDiplo_Recipe(BasicNewsRecipe):
    title = u'El Diplo'
    __author__ = 'Tomas Di Domenico'
    description = 'Publicacion mensual de Le Monde Diplomatique, edicion Argentina'
    language = 'es_AR'
    needs_subscription = True
    auto_cleanup = True

    def get_cover(self, url):
        tmp_cover = PersistentTemporaryFile(suffix=".jpg", prefix="eldiplo_")
        self.cover_url = tmp_cover.name

        with closing(self.browser.open(url)) as r:
            imgdata = r.read()

        img = Image()
        img.load(imgdata)
        img.crop(img.size[0], img.size[1]/2, 0, 0)

        img.save(tmp_cover.name)

    def get_browser(self):
        br = BasicNewsRecipe.get_browser()
        if self.username is not None and self.password is not None:
            br.open('http://www.eldiplo.org/index.php/login/-/do_login/index.html')
            br.select_form(nr=3)
            br['uName'] = self.username
            br['uPassword'] = self.password
            br.submit()
        self.browser = br
        return br

    def parse_index(self):
        default_sect = 'General'
        articles = {default_sect: []}
        ans = [default_sect]
        sectionsmarker = 'DOSSIER_TITLE: '
        sectionsre = re.compile('^' + sectionsmarker)

        soup = self.index_to_soup('http://www.eldiplo.org/index.php')

        coverdivs = soup.findAll(True, attrs={'id': ['lmd-foto']})
        a = coverdivs[0].find('a', href=True)
        coverurl = a['href'].split("?imagen=")[1]
        self.get_cover(coverurl)

        thedivs = soup.findAll(True, attrs={'class': ['lmd-leermas']})
        for div in thedivs:
            a = div.find('a', href=True)
            if 'Sumario completo' in self.tag_to_string(a, use_alt=True):
                summaryurl = re.sub(r'\?.*', '', a['href'])
                summaryurl = 'http://www.eldiplo.org' + summaryurl

        for pagenum in xrange(1, 10):
            soup = self.index_to_soup('{0}/?cms1_paging_p_b32={1}'.format(summaryurl, pagenum))
            thedivs = soup.findAll(True, attrs={'class': ['interna']})

            if len(thedivs) == 0:
                break

            for div in thedivs:
                section = div.find(True, text=sectionsre).replace(sectionsmarker, '')
                if section == '':
                    section = default_sect

                if section not in articles.keys():
                    articles[section] = []
                    ans.append(section)

                nota = div.find(True, attrs={'class': ['lmd-pl-titulo-nota-dossier']})
                a = nota.find('a', href=True)
                if not a:
                    continue

                url = re.sub(r'\?.*', '', a['href'])
                url = 'http://www.eldiplo.org' + url
                title = self.tag_to_string(a, use_alt=True).strip()

                summary = div.find(True, attrs={'class': 'lmd-sumario-descript'}).find('p')
                if summary:
                    description = self.tag_to_string(summary, use_alt=False)

                aut = div.find(True, attrs={'class': 'lmd-autor-sumario'})
                if aut:
                    auth = self.tag_to_string(aut, use_alt=False).strip()

                if not articles.has_key(section):
                    articles[section] = []

                articles[section].append(dict(title=title, author=auth, url=url, date=None, description=description, content=''))

        #ans = self.sort_index_by(ans, {'The Front Page':-1, 'Dining In, Dining Out':1, 'Obituaries':2})
        ans = [(section, articles[section]) for section in ans if articles.has_key(section)]
        return ans
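The `parse_index` above accumulates articles into a dict keyed by section name while a separate list preserves first-seen section order, then emits only the sections that received articles. A minimal standalone sketch of that pattern, with hypothetical sample data (`group_by_section` is illustrative, not part of the recipe):

```python
# Minimal sketch of the section-accumulation pattern used in parse_index above:
# `articles` maps a section name to its article dicts, while `ans` preserves
# the order in which sections were first seen. Sample data is hypothetical.

def group_by_section(items, default_sect='General'):
    articles = {default_sect: []}
    ans = [default_sect]
    for section, title in items:
        section = section or default_sect
        if section not in articles:
            articles[section] = []
            ans.append(section)
        articles[section].append({'title': title})
    # Keep only sections that actually received articles, in first-seen order
    return [(s, articles[s]) for s in ans if articles[s]]

result = group_by_section([('Dossier', 'A'), ('', 'B'), ('Dossier', 'C')])
# → [('General', [{'title': 'B'}]), ('Dossier', [{'title': 'A'}, {'title': 'C'}])]
```

The same shape appears in several recipes in this commit; keeping the ordered key list separate from the dict is what lets the final tuple list come out in page order rather than dict order.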
@@ -18,7 +18,7 @@ class Fleshbot(BasicNewsRecipe):
     encoding = 'utf-8'
     use_embedded_content = True
     language = 'en'
-    masthead_url = 'http://cache.gawkerassets.com/assets/kotaku.com/img/logo.png'
+    masthead_url = 'http://fbassets.s3.amazonaws.com/images/uploads/2012/01/fleshbot-logo.png'
     extra_css = '''
         body{font-family: "Lucida Grande",Helvetica,Arial,sans-serif}
         img{margin-bottom: 1em}
@@ -31,7 +31,7 @@ class Fleshbot(BasicNewsRecipe):
     , 'language' : language
     }

-    feeds = [(u'Articles', u'http://feeds.gawker.com/fleshbot/vip?format=xml')]
+    feeds = [(u'Articles', u'http://www.fleshbot.com/feed')]

     remove_tags = [
         {'class': 'feedflare'},
@@ -11,21 +11,21 @@ class ForeignAffairsRecipe(BasicNewsRecipe):
     by Chen Wei weichen302@gmx.com, 2012-02-05'''

     __license__ = 'GPL v3'
-    __author__ = 'kwetal'
+    __author__ = 'Rick Shang, kwetal'
     language = 'en'
     version = 1.01

-    title = u'Foreign Affairs (Subcription or (free) Registration)'
+    title = u'Foreign Affairs (Subcription)'
     publisher = u'Council on Foreign Relations'
     category = u'USA, Foreign Affairs'
     description = u'The leading forum for serious discussion of American foreign policy and international affairs.'

     no_stylesheets = True
     remove_javascript = True
+    needs_subscription = True

     INDEX = 'http://www.foreignaffairs.com'
     FRONTPAGE = 'http://www.foreignaffairs.com/magazine'
-    INCLUDE_PREMIUM = False


     remove_tags = []
@@ -68,43 +68,57 @@ class ForeignAffairsRecipe(BasicNewsRecipe):


     def parse_index(self):

         answer = []
         soup = self.index_to_soup(self.FRONTPAGE)
-        sec_start = soup.findAll('div', attrs={'class':'panel-separator'})
+        #get dates
+        date = re.split('\s\|\s',self.tag_to_string(soup.head.title.string))[0]
+        self.timefmt = u' [%s]'%date
+
+        sec_start = soup.findAll('div', attrs= {'class':'panel-pane'})
         for sec in sec_start:
-            content = sec.nextSibling
-            if content:
-                section = self.tag_to_string(content.find('h2'))
-                articles = []
-
-                tags = []
-                for div in content.findAll('div', attrs = {'class': re.compile(r'view-row\s+views-row-[0-9]+\s+views-row-[odd|even].*')}):
-                    tags.append(div)
-                for li in content.findAll('li'):
-                    tags.append(li)
-
-                for div in tags:
-                    title = url = description = author = None
-
-                    if self.INCLUDE_PREMIUM:
-                        found_premium = False
-                    else:
-                        found_premium = div.findAll('span', attrs={'class':
-                                                    'premium-icon'})
-                    if not found_premium:
-                        tag = div.find('div', attrs={'class': 'views-field-title'})
-
-                        if tag:
-                            a = tag.find('a')
-                            if a:
-                                title = self.tag_to_string(a)
-                                url = self.INDEX + a['href']
-                                author = self.tag_to_string(div.find('div', attrs = {'class': 'views-field-field-article-display-authors-value'}))
-                                tag_summary = div.find('span', attrs = {'class': 'views-field-field-article-summary-value'})
-                                description = self.tag_to_string(tag_summary)
-                                articles.append({'title':title, 'date':None, 'url':url,
-                                                 'description':description, 'author':author})
-            if articles:
+            articles = []
+            section = self.tag_to_string(sec.find('h2'))
+            if 'Books' in section:
+                reviewsection=sec.find('div', attrs = {'class': 'item-list'})
+                for subsection in reviewsection.findAll('div'):
+                    subsectiontitle=self.tag_to_string(subsection.span.a)
+                    subsectionurl=self.INDEX + subsection.span.a['href']
+                    soup1 = self.index_to_soup(subsectionurl)
+                    for div in soup1.findAll('div', attrs = {'class': 'views-field-title'}):
+                        if div.find('a') is not None:
+                            originalauthor=self.tag_to_string(div.findNext('div', attrs = {'class':'views-field-field-article-book-nid'}).div.a)
+                            title=subsectiontitle+': '+self.tag_to_string(div.span.a)+' by '+originalauthor
+                            url=self.INDEX+div.span.a['href']
+                            atr=div.findNext('div', attrs = {'class': 'views-field-field-article-display-authors-value'})
+                            if atr is not None:
+                                author=self.tag_to_string(atr.span.a)
+                            else:
+                                author=''
+                            desc=div.findNext('span', attrs = {'class': 'views-field-field-article-summary-value'})
+                            if desc is not None:
+                                description=self.tag_to_string(desc.div.p)
+                            else:
+                                description=''
+                            articles.append({'title':title, 'date':None, 'url':url, 'description':description, 'author':author})
+                    subsectiontitle=''
+            else:
+                for div in sec.findAll('div', attrs = {'class': 'views-field-title'}):
+                    if div.find('a') is not None:
+                        title=self.tag_to_string(div.span.a)
+                        url=self.INDEX+div.span.a['href']
+                        atr=div.findNext('div', attrs = {'class': 'views-field-field-article-display-authors-value'})
+                        if atr is not None:
+                            author=self.tag_to_string(atr.span.a)
+                        else:
+                            author=''
+                        desc=div.findNext('span', attrs = {'class': 'views-field-field-article-summary-value'})
+                        if desc is not None:
+                            description=self.tag_to_string(desc.div.p)
+                        else:
+                            description=''
+                        articles.append({'title':title, 'date':None, 'url':url, 'description':description, 'author':author})
+            if articles:
                 answer.append((section, articles))
         return answer

@@ -115,15 +129,17 @@ class ForeignAffairsRecipe(BasicNewsRecipe):

         return soup

-    needs_subscription = True
-
     def get_browser(self):
         br = BasicNewsRecipe.get_browser()
         if self.username is not None and self.password is not None:
-            br.open('https://www.foreignaffairs.com/user?destination=home')
+            br.open('https://www.foreignaffairs.com/user?destination=user%3Fop%3Dlo')
             br.select_form(nr = 1)
             br['name'] = self.username
             br['pass'] = self.password
             br.submit()
         return br
+
+    def cleanup(self):
+        self.browser.open('http://www.foreignaffairs.com/logout?destination=user%3Fop=lo')
BIN	recipes/icons/libartes.png	Normal file
Binary file not shown.
After Width: | Height: | Size: 282 B
@@ -28,12 +28,15 @@ class IlMessaggero(BasicNewsRecipe):
     recursion = 10

     remove_javascript = True
+    extra_css = ' .bianco31lucida{color: black} '

-    keep_only_tags = [dict(name='h1', attrs={'class':'titoloLettura2'}),
-                      dict(name='h2', attrs={'class':'sottotitLettura'}),
-                      dict(name='span', attrs={'class':'testoArticoloG'})
+    keep_only_tags = [dict(name='h1', attrs={'class':['titoloLettura2','titoloart','bianco31lucida']}),
+                      dict(name='h2', attrs={'class':['sottotitLettura','grigio16']}),
+                      dict(name='span', attrs={'class':'testoArticoloG'}),
+                      dict(name='div', attrs={'id':'testodim'})
                      ]


     def get_cover_url(self):
         cover = None
         st = time.localtime()
@@ -55,17 +58,16 @@ class IlMessaggero(BasicNewsRecipe):
     feeds = [
         (u'HomePage', u'http://www.ilmessaggero.it/rss/home.xml'),
         (u'Primo Piano', u'http://www.ilmessaggero.it/rss/initalia_primopiano.xml'),
-        (u'Cronaca Bianca', u'http://www.ilmessaggero.it/rss/initalia_cronacabianca.xml'),
-        (u'Cronaca Nera', u'http://www.ilmessaggero.it/rss/initalia_cronacanera.xml'),
         (u'Economia e Finanza', u'http://www.ilmessaggero.it/rss/economia.xml'),
         (u'Politica', u'http://www.ilmessaggero.it/rss/initalia_politica.xml'),
-        (u'Scienza e Tecnologia', u'http://www.ilmessaggero.it/rss/scienza.xml'),
-        (u'Cinema', u'http://www.ilmessaggero.it/rss.php?refresh_ce#'),
-        (u'Viaggi', u'http://www.ilmessaggero.it/rss.php?refresh_ce#'),
+        (u'Cultura', u'http://www.ilmessaggero.it/rss/cultura.xml'),
+        (u'Tecnologia', u'http://www.ilmessaggero.it/rss/tecnologia.xml'),
+        (u'Spettacoli', u'http://www.ilmessaggero.it/rss/spettacoli.xml'),
+        (u'Edizioni Locali', u'http://www.ilmessaggero.it/rss/edlocali.xml'),
         (u'Roma', u'http://www.ilmessaggero.it/rss/roma.xml'),
-        (u'Cultura e Tendenze', u'http://www.ilmessaggero.it/rss/roma_culturaspet.xml'),
+        (u'Benessere', u'http://www.ilmessaggero.it/rss/benessere.xml'),
         (u'Sport', u'http://www.ilmessaggero.it/rss/sport.xml'),
-        (u'Calcio', u'http://www.ilmessaggero.it/rss/sport_calcio.xml'),
-        (u'Motori', u'http://www.ilmessaggero.it/rss/sport_motori.xml')
+        (u'Moda', u'http://www.ilmessaggero.it/rss/moda.xml')
     ]
69	recipes/libartes.recipe	Normal file
@@ -0,0 +1,69 @@
__license__ = 'GPL v3'
__copyright__ = '2013, Darko Miletic <darko.miletic at gmail.com>'
'''
libartes.com
'''

import re
from calibre import strftime
from calibre.web.feeds.news import BasicNewsRecipe

class Libartes(BasicNewsRecipe):
    title = 'Libartes'
    __author__ = 'Darko Miletic'
    description = 'Elektronski časopis Libartes delo je kulturnih entuzijasta, umetnika i teoretičara umetnosti i književnosti. Časopis Libartes izlazi tromesečno i bavi se različitim granama umetnosti - književnošću, muzikom, filmom, likovnim umetnostima, dizajnom i arhitekturom.'
    publisher = 'Libartes'
    category = 'literatura, knjizevnost, film, dizajn, arhitektura, muzika'
    no_stylesheets = True
    INDEX = 'http://libartes.com/'
    use_embedded_content = False
    encoding = 'utf-8'
    language = 'sr'
    publication_type = 'magazine'
    masthead_url = 'http://libartes.com/index_files/logo.gif'
    extra_css = """
        @font-face {font-family: "serif1";src:url(res:///opt/sony/ebook/FONT/tt0011m_.ttf)}
        @font-face {font-family: "sans1";src:url(res:///opt/sony/ebook/FONT/tt0003m_.ttf)}
        body{font-family: "Times New Roman",Times,serif1, serif}
        img{display:block}
        .naslov{font-size: xx-large; font-weight: bold}
        .nag{font-size: large; font-weight: bold}
    """

    conversion_options = {
        'comment'     : description
        , 'tags'      : category
        , 'publisher' : publisher
        , 'language'  : language
    }

    preprocess_regexps = [(re.compile(u'\u0110'), lambda match: u'\u00D0')]
    remove_tags_before = dict(attrs={'id':'nav'})
    remove_tags_after = dict(attrs={'id':'fb' })
    keep_only_tags = [dict(name='div', attrs={'id':'center_content'})]
    remove_tags = [
        dict(name=['object','link','iframe','embed','meta'])
        ,dict(attrs={'id':'nav'})
    ]

    def parse_index(self):
        articles = []
        soup = self.index_to_soup(self.INDEX)
        for item in soup.findAll(name='a', attrs={'class':'belad'}, href=True):
            feed_link = item
            if feed_link['href'].startswith(self.INDEX):
                url = feed_link['href']
            else:
                url = self.INDEX + feed_link['href']

            title = self.tag_to_string(feed_link)
            date = strftime(self.timefmt)
            articles.append({
                'title'       :title
                ,'date'       :date
                ,'url'        :url
                ,'description':''
            })
        return [('Casopis Libartes', articles)]
@@ -14,7 +14,8 @@ class LiberoNews(BasicNewsRecipe):
     __author__ = 'Marini Gabriele'
     description = 'Italian daily newspaper'

-    cover_url = 'http://www.libero-news.it/images/logo.png'
+    #cover_url = 'http://www.liberoquotidiano.it/images/Libero%20Quotidiano.jpg'
+    cover_url = 'http://www.edicola.liberoquotidiano.it/vnlibero/fpcut.jsp?testata=milano'
     title = u'Libero '
     publisher = 'EDITORIALE LIBERO s.r.l 2006'
     category = 'News, politics, culture, economy, general interest'
@@ -32,10 +33,11 @@ class LiberoNews(BasicNewsRecipe):
     remove_javascript = True

     keep_only_tags = [
-        dict(name='div', attrs={'class':'Articolo'})
+        dict(name='div', attrs={'class':'Articolo'}),
+        dict(name='article')
     ]
     remove_tags = [
-        dict(name='div', attrs={'class':['CommentaFoto','Priva2']}),
+        dict(name='div', attrs={'class':['CommentaFoto','Priva2','login_commenti','box_16']}),
         dict(name='div', attrs={'id':['commentigenerale']})
     ]
     feeds = [
@@ -1,224 +0,0 @@
#!/usr/bin/env python
##
## Title:        Microwave and RF
##
## License:      GNU General Public License v3 - http://www.gnu.org/copyleft/gpl.html

# Feb 2012: Initial release

__license__ = 'GNU General Public License v3 - http://www.gnu.org/copyleft/gpl.html'
'''
mwrf.com
'''

import re
from calibre.web.feeds.news import BasicNewsRecipe
from calibre.utils.magick import Image

class Microwaves_and_RF(BasicNewsRecipe):

    Convert_Grayscale = False  # Convert images to gray scale or not

    # Add sections that want to be excluded from the magazine
    exclude_sections = []

    # Add sections that want to be included from the magazine
    include_sections = []

    title = u'Microwaves and RF'
    __author__ = u'kiavash'
    description = u'Microwaves and RF Montly Magazine'
    publisher = 'Penton Media, Inc.'
    publication_type = 'magazine'
    site = 'http://mwrf.com'

    language = 'en'
    asciiize = True
    timeout = 120
    simultaneous_downloads = 1  # very peaky site!

    # Main article is inside this tag
    keep_only_tags = [dict(name='table', attrs={'id':'prtContent'})]

    no_stylesheets = True
    remove_javascript = True

    # Flattens all the tables to make it compatible with Nook
    conversion_options = {'linearize_tables' : True}

    remove_tags = [
        dict(name='span', attrs={'class':'body12'}),
    ]

    remove_attributes = [ 'border', 'cellspacing', 'align', 'cellpadding', 'colspan',
                          'valign', 'vspace', 'hspace', 'alt', 'width', 'height' ]

    # Specify extra CSS - overrides ALL other CSS (IE. Added last).
    extra_css = 'body { font-family: verdana, helvetica, sans-serif; } \
                 .introduction, .first { font-weight: bold; } \
                 .cross-head { font-weight: bold; font-size: 125%; } \
                 .cap, .caption { display: block; font-size: 80%; font-style: italic; } \
                 .cap, .caption, .caption img, .caption span { display: block; margin: 5px auto; } \
                 .byl, .byd, .byline img, .byline-name, .byline-title, .author-name, .author-position, \
                 .correspondent-portrait img, .byline-lead-in, .name, .bbc-role { display: block; \
                 font-size: 80%; font-style: italic; margin: 1px auto; } \
                 .story-date, .published { font-size: 80%; } \
                 table { width: 100%; } \
                 td img { display: block; margin: 5px auto; } \
                 ul { padding-top: 10px; } \
                 ol { padding-top: 10px; } \
                 li { padding-top: 5px; padding-bottom: 5px; } \
                 h1 { font-size: 175%; font-weight: bold; } \
                 h2 { font-size: 150%; font-weight: bold; } \
                 h3 { font-size: 125%; font-weight: bold; } \
                 h4, h5, h6 { font-size: 100%; font-weight: bold; }'

    # Remove the line breaks and float left/right and picture width/height.
    preprocess_regexps = [(re.compile(r'<br[ ]*/>', re.IGNORECASE), lambda m: ''),
                          (re.compile(r'<br[ ]*clear.*/>', re.IGNORECASE), lambda m: ''),
                          (re.compile(r'float:.*?'), lambda m: ''),
                          (re.compile(r'width:.*?px'), lambda m: ''),
                          (re.compile(r'height:.*?px'), lambda m: '')
                         ]

    def print_version(self, url):
        url = re.sub(r'.html', '', url)
        url = re.sub('/ArticleID/.*?/', '/Print.cfm?ArticleID=', url)
        return url

    # Need to change the user agent to avoid potential download errors
    def get_browser(self, *args, **kwargs):
        from calibre import browser
        kwargs['user_agent'] = 'Mozilla/5.0 (Windows NT 5.1; rv:10.0) Gecko/20100101 Firefox/10.0'
        return browser(*args, **kwargs)

    def parse_index(self):

        # Fetches the main page of Microwaves and RF
        soup = self.index_to_soup(self.site)

        # First page has the ad, Let's find the redirect address.
        url = soup.find('span', attrs={'class':'commonCopy'}).find('a').get('href')
        if url.startswith('/'):
            url = self.site + url

        soup = self.index_to_soup(url)

        # Searches the site for Issue ID link then returns the href address
        # pointing to the latest issue
        latest_issue = soup.find('a', attrs={'href':lambda x: x and 'IssueID' in x}).get('href')

        # Fetches the index page for of the latest issue
        soup = self.index_to_soup(latest_issue)

        # Finds the main section of the page containing cover, issue date and
        # TOC
        ts = soup.find('div', attrs={'id':'columnContainer'})

        # Finds the issue date
        ds = ' '.join(self.tag_to_string(ts.find('span', attrs={'class':'CurrentIssueSectionHead'})).strip().split()[-2:]).capitalize()
        self.log('Found Current Issue:', ds)
        self.timefmt = ' [%s]'%ds

        # Finds the cover image
        cover = ts.find('img', src = lambda x: x and 'Cover' in x)
        if cover is not None:
            self.cover_url = self.site + cover['src']
            self.log('Found Cover image:', self.cover_url)

        feeds = []
        article_info = []

        # Finds all the articles (tiles and links)
        articles = ts.findAll('a', attrs={'class':'commonArticleTitle'})

        # Finds all the descriptions
        descriptions = ts.findAll('span', attrs={'class':'commonCopy'})

        # Find all the sections
        sections = ts.findAll('span', attrs={'class':'kicker'})

        title_number = 0

        # Goes thru all the articles one by one and sort them out
        for section in sections:
            title_number = title_number + 1

            # Removes the unwanted sections
            if self.tag_to_string(section) in self.exclude_sections:
                continue

            # Only includes the wanted sections
            if self.include_sections:
                if self.tag_to_string(section) not in self.include_sections:
                    continue

            title = self.tag_to_string(articles[title_number])
            url = articles[title_number].get('href')
            if url.startswith('/'):
                url = self.site + url

            self.log('\tFound article:', title, 'at', url)
            desc = self.tag_to_string(descriptions[title_number])
            self.log('\t\t', desc)

            article_info.append({'title':title, 'url':url, 'description':desc,
                                 'date':self.timefmt})

        if article_info:
            feeds.append((self.title, article_info))

        #self.log(feeds)
        return feeds

    def postprocess_html(self, soup, first):
        if self.Convert_Grayscale:
            #process all the images
            for tag in soup.findAll(lambda tag: tag.name.lower()=='img' and tag.has_key('src')):
                iurl = tag['src']
                img = Image()
                img.open(iurl)
                if img < 0:
                    raise RuntimeError('Out of memory')
                img.type = "GrayscaleType"
                img.save(iurl)
        return soup

    def preprocess_html(self, soup):

        # Includes all the figures inside the final ebook
        # Finds all the jpg links
        for figure in soup.findAll('a', attrs = {'href' : lambda x: x and 'jpg' in x}):

            # makes sure that the link points to the absolute web address
            if figure['href'].startswith('/'):
                figure['href'] = self.site + figure['href']

            figure.name = 'img'  # converts the links to img
            figure['src'] = figure['href']  # with the same address as href
            figure['style'] = 'display:block'  # adds /n before and after the image
            del figure['href']
            del figure['target']

        # Makes the title standing out
        for title in soup.findAll('a', attrs = {'class': 'commonSectionTitle'}):
            title.name = 'h1'
            del title['href']
            del title['target']

        # Makes the section name more visible
        for section_name in soup.findAll('a', attrs = {'class': 'kicker2'}):
            section_name.name = 'h5'
            del section_name['href']
            del section_name['target']

        # Removes all unrelated links
        for link in soup.findAll('a', attrs = {'href': True}):
            link.name = 'font'
            del link['href']
            del link['target']

        return soup
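The `print_version` rewrite in the removed Microwaves and RF recipe above can be exercised on its own: strip the `.html` suffix, then swap the `/ArticleID/<id>/<slug>` path segment for the print CGI form. A minimal sketch with a hypothetical sample URL (note the original used `r'.html'`, where the unescaped dot matches any character; it is escaped here):

```python
import re

# Sketch of the print_version URL rewrite from the removed recipe above.
# The sample URL below is hypothetical.
def print_version(url):
    url = re.sub(r'\.html', '', url)  # drop the .html suffix
    # replace the /ArticleID/<id>/ segment (non-greedy) with the print form
    url = re.sub('/ArticleID/.*?/', '/Print.cfm?ArticleID=', url)
    return url

print_version('http://mwrf.com/Articles/ArticleID/12345/12345.html')
# → 'http://mwrf.com/Articles/Print.cfm?ArticleID=12345'
```

The non-greedy `.*?` matters: a greedy match would consume through the last slash in the path rather than stopping after the article id.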
@@ -66,21 +66,22 @@ class NewYorkReviewOfBooks(BasicNewsRecipe):
         self.log('Issue date:', date)

         # Find TOC
-        toc = soup.find('ul', attrs={'class':'issue-article-list'})
+        tocs = soup.findAll('ul', attrs={'class':'issue-article-list'})
         articles = []
-        for li in toc.findAll('li'):
-            h3 = li.find('h3')
-            title = self.tag_to_string(h3)
-            author = self.tag_to_string(li.find('h4'))
-            title = title + u' (%s)'%author
-            url = 'http://www.nybooks.com'+h3.find('a', href=True)['href']
-            desc = ''
-            for p in li.findAll('p'):
-                desc += self.tag_to_string(p)
-            self.log('Found article:', title)
-            self.log('\t', url)
-            self.log('\t', desc)
-            articles.append({'title':title, 'url':url, 'date':'',
-                             'description':desc})
+        for toc in tocs:
+            for li in toc.findAll('li'):
+                h3 = li.find('h3')
+                title = self.tag_to_string(h3)
+                author = self.tag_to_string(li.find('h4'))
+                title = title + u' (%s)'%author
+                url = 'http://www.nybooks.com'+h3.find('a', href=True)['href']
+                desc = ''
+                for p in li.findAll('p'):
+                    desc += self.tag_to_string(p)
+                self.log('Found article:', title)
+                self.log('\t', url)
+                self.log('\t', desc)
+                articles.append({'title':title, 'url':url, 'date':'',
+                                 'description':desc})

         return [('Current Issue', articles)]
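The New York Review of Books fix above swaps a single `find` for `findAll` and nests the loop, so every matching TOC list contributes articles instead of only the first. The flattening it performs can be sketched with plain lists (hypothetical data, no BeautifulSoup involved):

```python
# Sketch of the change above: instead of reading items from one TOC list,
# iterate every TOC list and flatten their items into a single sequence.
# Sample data is hypothetical.

def collect(tocs):
    articles = []
    for toc in tocs:       # findAll returns every matching <ul>
        for li in toc:     # each <li> becomes one article entry
            articles.append(li)
    return articles

collect([['a1', 'a2'], ['b1']])
# → ['a1', 'a2', 'b1']
```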
@@ -15,6 +15,7 @@ from calibre.ebooks.BeautifulSoup import BeautifulSoup, Tag, BeautifulStoneSoup
 class NYTimes(BasicNewsRecipe):

     recursions=1  # set this to zero to omit Related articles lists
+    match_regexps=[r'/[12][0-9][0-9][0-9]/[0-9]+/']  # speeds up processing by preventing index page links from being followed

     # set getTechBlogs to True to include the technology blogs
     # set tech_oldest_article to control article age
@@ -24,6 +25,14 @@ class NYTimes(BasicNewsRecipe):
     tech_oldest_article = 14
     tech_max_articles_per_feed = 25

+    # set getPopularArticles to False if you don't want the Most E-mailed and Most Viewed articles
+    # otherwise you will get up to 20 of the most popular e-mailed and viewed articles (in each category)
+    getPopularArticles = True
+    popularPeriod = '1'  # set this to the number of days to include in the measurement
+                         # e.g. 7 will get the most popular measured over the last 7 days
+                         # and 30 will get the most popular measured over 30 days.
+                         # you still only get up to 20 articles in each category
+
     # set headlinesOnly to True for the headlines-only version. If True, webEdition is ignored.
     headlinesOnly = True
@@ -376,6 +385,7 @@ class NYTimes(BasicNewsRecipe):

     masthead_url = 'http://graphics8.nytimes.com/images/misc/nytlogo379x64.gif'
+

     def short_title(self):
         return self.title

@@ -384,6 +394,7 @@ class NYTimes(BasicNewsRecipe):
         from contextlib import closing
         import copy
         from calibre.ebooks.chardet import xml_to_unicode
+        print("ARTICLE_TO_SOUP "+url_or_raw)
         if re.match(r'\w+://', url_or_raw):
             br = self.clone_browser(self.browser)
             open_func = getattr(br, 'open_novisit', br.open)
@@ -475,6 +486,67 @@ class NYTimes(BasicNewsRecipe):
                             description=description, author=author,
                             content=''))

+    def get_popular_articles(self,ans):
+        if self.getPopularArticles:
+            popular_articles = {}
+            key_list = []
+
+            def handleh3(h3tag):
+                try:
+                    url = h3tag.a['href']
+                except:
+                    return ('','','','')
+                url = re.sub(r'\?.*', '', url)
+                if self.exclude_url(url):
+                    return ('','','','')
+                url += '?pagewanted=all'
+                title = self.tag_to_string(h3tag.a,False)
+                h6tag = h3tag.findNextSibling('h6')
+                if h6tag is not None:
+                    author = self.tag_to_string(h6tag,False)
+                else:
+                    author = ''
+                ptag = h3tag.findNextSibling('p')
+                if ptag is not None:
+                    desc = self.tag_to_string(ptag,False)
+                else:
+                    desc = ''
+                return(title,url,author,desc)
+
+            have_emailed = False
+            emailed_soup = self.index_to_soup('http://www.nytimes.com/most-popular-emailed?period='+self.popularPeriod)
+            for h3tag in emailed_soup.findAll('h3'):
+                (title,url,author,desc) = handleh3(h3tag)
+                if url=='':
+                    continue
+                if not have_emailed:
+                    key_list.append('Most E-Mailed')
+                    popular_articles['Most E-Mailed'] = []
+                    have_emailed = True
+                popular_articles['Most E-Mailed'].append(
|
||||||
|
dict(title=title, url=url, date=strftime('%a, %d %b'),
|
||||||
|
description=desc, author=author,
|
||||||
|
content=''))
|
||||||
|
have_viewed = False
|
||||||
|
viewed_soup = self.index_to_soup('http://www.nytimes.com/most-popular-viewed?period='+self.popularPeriod)
|
||||||
|
for h3tag in viewed_soup.findAll('h3'):
|
||||||
|
(title,url,author,desc) = handleh3(h3tag)
|
||||||
|
if url=='':
|
||||||
|
continue
|
||||||
|
if not have_viewed:
|
||||||
|
key_list.append('Most Viewed')
|
||||||
|
popular_articles['Most Viewed'] = []
|
||||||
|
have_viewed = True
|
||||||
|
popular_articles['Most Viewed'].append(
|
||||||
|
dict(title=title, url=url, date=strftime('%a, %d %b'),
|
||||||
|
description=desc, author=author,
|
||||||
|
content=''))
|
||||||
|
viewed_ans = [(k, popular_articles[k]) for k in key_list if popular_articles.has_key(k)]
|
||||||
|
for x in viewed_ans:
|
||||||
|
ans.append(x)
|
||||||
|
return ans
|
||||||
|
|
||||||
def get_tech_feeds(self,ans):
|
def get_tech_feeds(self,ans):
|
||||||
if self.getTechBlogs:
|
if self.getTechBlogs:
|
||||||
tech_articles = {}
|
tech_articles = {}
|
||||||
@ -536,7 +608,7 @@ class NYTimes(BasicNewsRecipe):
|
|||||||
self.handle_article(lidiv)
|
self.handle_article(lidiv)
|
||||||
|
|
||||||
self.ans = [(k, self.articles[k]) for k in self.ans if self.articles.has_key(k)]
|
self.ans = [(k, self.articles[k]) for k in self.ans if self.articles.has_key(k)]
|
||||||
return self.filter_ans(self.get_tech_feeds(self.ans))
|
return self.filter_ans(self.get_tech_feeds(self.get_popular_articles(self.ans)))
|
||||||
|
|
||||||
|
|
||||||
def parse_todays_index(self):
|
def parse_todays_index(self):
|
||||||
@ -569,7 +641,7 @@ class NYTimes(BasicNewsRecipe):
|
|||||||
self.handle_article(lidiv)
|
self.handle_article(lidiv)
|
||||||
|
|
||||||
self.ans = [(k, self.articles[k]) for k in self.ans if self.articles.has_key(k)]
|
self.ans = [(k, self.articles[k]) for k in self.ans if self.articles.has_key(k)]
|
||||||
return self.filter_ans(self.get_tech_feeds(self.ans))
|
return self.filter_ans(self.get_tech_feeds(self.get_popular_articles(self.ans)))
|
||||||
|
|
||||||
def parse_headline_index(self):
|
def parse_headline_index(self):
|
||||||
|
|
||||||
@ -643,7 +715,7 @@ class NYTimes(BasicNewsRecipe):
|
|||||||
self.articles[section_name].append(dict(title=title, url=url, date=pubdate, description=description, author=author, content=''))
|
self.articles[section_name].append(dict(title=title, url=url, date=pubdate, description=description, author=author, content=''))
|
||||||
|
|
||||||
self.ans = [(k, self.articles[k]) for k in self.ans if self.articles.has_key(k)]
|
self.ans = [(k, self.articles[k]) for k in self.ans if self.articles.has_key(k)]
|
||||||
return self.filter_ans(self.get_tech_feeds(self.ans))
|
return self.filter_ans(self.get_tech_feeds(self.get_popular_articles(self.ans)))
|
||||||
|
|
||||||
def parse_index(self):
|
def parse_index(self):
|
||||||
if self.headlinesOnly:
|
if self.headlinesOnly:
|
||||||
@ -731,7 +803,7 @@ class NYTimes(BasicNewsRecipe):
|
|||||||
|
|
||||||
|
|
||||||
def preprocess_html(self, soup):
|
def preprocess_html(self, soup):
|
||||||
#print("PREPROCESS TITLE="+self.tag_to_string(soup.title))
|
#print(strftime("%H:%M:%S")+" -- PREPROCESS TITLE="+self.tag_to_string(soup.title))
|
||||||
skip_tag = soup.find(True, {'name':'skip'})
|
skip_tag = soup.find(True, {'name':'skip'})
|
||||||
if skip_tag is not None:
|
if skip_tag is not None:
|
||||||
#url = 'http://www.nytimes.com' + re.sub(r'\?.*', '', skip_tag.parent['href'])
|
#url = 'http://www.nytimes.com' + re.sub(r'\?.*', '', skip_tag.parent['href'])
|
||||||
@ -907,6 +979,7 @@ class NYTimes(BasicNewsRecipe):
|
|||||||
for aside in soup.findAll('div','aside'):
|
for aside in soup.findAll('div','aside'):
|
||||||
aside.extract()
|
aside.extract()
|
||||||
soup = self.strip_anchors(soup,True)
|
soup = self.strip_anchors(soup,True)
|
||||||
|
#print("RECURSIVE: "+self.tag_to_string(soup.title))
|
||||||
|
|
||||||
if soup.find('div',attrs={'id':'blogcontent'}) is None:
|
if soup.find('div',attrs={'id':'blogcontent'}) is None:
|
||||||
if first_fetch:
|
if first_fetch:
|
||||||
@ -1071,7 +1144,7 @@ class NYTimes(BasicNewsRecipe):
|
|||||||
divTag.replaceWith(tag)
|
divTag.replaceWith(tag)
|
||||||
except:
|
except:
|
||||||
self.log("ERROR: Problem in Add class=authorId to <div> so we can format with CSS")
|
self.log("ERROR: Problem in Add class=authorId to <div> so we can format with CSS")
|
||||||
|
#print(strftime("%H:%M:%S")+" -- POSTPROCESS TITLE="+self.tag_to_string(soup.title))
|
||||||
return soup
|
return soup
|
||||||
|
|
||||||
def populate_article_metadata(self, article, soup, first):
|
def populate_article_metadata(self, article, soup, first):
|
||||||
|
@@ -15,6 +15,7 @@ from calibre.ebooks.BeautifulSoup import BeautifulSoup, Tag, BeautifulStoneSoup
 class NYTimes(BasicNewsRecipe):

     recursions=1  # set this to zero to omit Related articles lists
+    match_regexps=[r'/[12][0-9][0-9][0-9]/[0-9]+/']   # speeds up processing by preventing index page links from being followed

     # set getTechBlogs to True to include the technology blogs
     # set tech_oldest_article to control article age
@@ -24,6 +25,14 @@ class NYTimes(BasicNewsRecipe):
     tech_oldest_article = 14
     tech_max_articles_per_feed = 25

+    # set getPopularArticles to False if you don't want the Most E-mailed and Most Viewed articles
+    # otherwise you will get up to 20 of the most popular e-mailed and viewed articles (in each category)
+    getPopularArticles = True
+    popularPeriod = '1'  # set this to the number of days to include in the measurement
+    # e.g. 7 will get the most popular measured over the last 7 days
+    # and 30 will get the most popular measured over 30 days.
+    # you still only get up to 20 articles in each category
+
     # set headlinesOnly to True for the headlines-only version. If True, webEdition is ignored.
     headlinesOnly = False
@@ -115,19 +124,19 @@ class NYTimes(BasicNewsRecipe):
     if headlinesOnly:
         title='New York Times Headlines'
         description = 'Headlines from the New York Times'
-        needs_subscription = False
+        needs_subscription = True
     elif webEdition:
         title='New York Times (Web)'
         description = 'New York Times on the Web'
-        needs_subscription = False
+        needs_subscription = True
     elif replaceKindleVersion:
         title='The New York Times'
         description = 'Today\'s New York Times'
-        needs_subscription = False
+        needs_subscription = True
     else:
         title='New York Times'
         description = 'Today\'s New York Times'
-        needs_subscription = False
+        needs_subscription = True

     def decode_url_date(self,url):
         urlitems = url.split('/')
@@ -350,6 +359,14 @@ class NYTimes(BasicNewsRecipe):

     def get_browser(self):
         br = BasicNewsRecipe.get_browser()
+        if self.username is not None and self.password is not None:
+            br.open('http://www.nytimes.com/auth/login')
+            br.form = br.forms().next()
+            br['userid'] = self.username
+            br['password'] = self.password
+            raw = br.submit().read()
+            if 'Please try again' in raw:
+                raise Exception('Your username and password are incorrect')
         return br

     cover_tag = 'NY_NYT'
@@ -376,6 +393,7 @@ class NYTimes(BasicNewsRecipe):

     masthead_url = 'http://graphics8.nytimes.com/images/misc/nytlogo379x64.gif'

+
     def short_title(self):
         return self.title

@@ -384,6 +402,7 @@ class NYTimes(BasicNewsRecipe):
         from contextlib import closing
         import copy
         from calibre.ebooks.chardet import xml_to_unicode
+        print("ARTICLE_TO_SOUP "+url_or_raw)
         if re.match(r'\w+://', url_or_raw):
             br = self.clone_browser(self.browser)
             open_func = getattr(br, 'open_novisit', br.open)
@@ -475,6 +494,67 @@ class NYTimes(BasicNewsRecipe):
                             description=description, author=author,
                             content=''))

+    def get_popular_articles(self,ans):
+        if self.getPopularArticles:
+            popular_articles = {}
+            key_list = []
+
+            def handleh3(h3tag):
+                try:
+                    url = h3tag.a['href']
+                except:
+                    return ('','','','')
+                url = re.sub(r'\?.*', '', url)
+                if self.exclude_url(url):
+                    return ('','','','')
+                url += '?pagewanted=all'
+                title = self.tag_to_string(h3tag.a,False)
+                h6tag = h3tag.findNextSibling('h6')
+                if h6tag is not None:
+                    author = self.tag_to_string(h6tag,False)
+                else:
+                    author = ''
+                ptag = h3tag.findNextSibling('p')
+                if ptag is not None:
+                    desc = self.tag_to_string(ptag,False)
+                else:
+                    desc = ''
+                return(title,url,author,desc)
+
+            have_emailed = False
+            emailed_soup = self.index_to_soup('http://www.nytimes.com/most-popular-emailed?period='+self.popularPeriod)
+            for h3tag in emailed_soup.findAll('h3'):
+                (title,url,author,desc) = handleh3(h3tag)
+                if url=='':
+                    continue
+                if not have_emailed:
+                    key_list.append('Most E-Mailed')
+                    popular_articles['Most E-Mailed'] = []
+                    have_emailed = True
+                popular_articles['Most E-Mailed'].append(
+                    dict(title=title, url=url, date=strftime('%a, %d %b'),
+                         description=desc, author=author,
+                         content=''))
+            have_viewed = False
+            viewed_soup = self.index_to_soup('http://www.nytimes.com/most-popular-viewed?period='+self.popularPeriod)
+            for h3tag in viewed_soup.findAll('h3'):
+                (title,url,author,desc) = handleh3(h3tag)
+                if url=='':
+                    continue
+                if not have_viewed:
+                    key_list.append('Most Viewed')
+                    popular_articles['Most Viewed'] = []
+                    have_viewed = True
+                popular_articles['Most Viewed'].append(
+                    dict(title=title, url=url, date=strftime('%a, %d %b'),
+                         description=desc, author=author,
+                         content=''))
+            viewed_ans = [(k, popular_articles[k]) for k in key_list if popular_articles.has_key(k)]
+            for x in viewed_ans:
+                ans.append(x)
+        return ans
+
     def get_tech_feeds(self,ans):
         if self.getTechBlogs:
             tech_articles = {}
@@ -536,7 +616,7 @@ class NYTimes(BasicNewsRecipe):
                         self.handle_article(lidiv)

         self.ans = [(k, self.articles[k]) for k in self.ans if self.articles.has_key(k)]
-        return self.filter_ans(self.get_tech_feeds(self.ans))
+        return self.filter_ans(self.get_tech_feeds(self.get_popular_articles(self.ans)))


     def parse_todays_index(self):
@@ -569,7 +649,7 @@ class NYTimes(BasicNewsRecipe):
                         self.handle_article(lidiv)

         self.ans = [(k, self.articles[k]) for k in self.ans if self.articles.has_key(k)]
-        return self.filter_ans(self.get_tech_feeds(self.ans))
+        return self.filter_ans(self.get_tech_feeds(self.get_popular_articles(self.ans)))

     def parse_headline_index(self):

@@ -643,7 +723,7 @@ class NYTimes(BasicNewsRecipe):
                     self.articles[section_name].append(dict(title=title, url=url, date=pubdate, description=description, author=author, content=''))

         self.ans = [(k, self.articles[k]) for k in self.ans if self.articles.has_key(k)]
-        return self.filter_ans(self.get_tech_feeds(self.ans))
+        return self.filter_ans(self.get_tech_feeds(self.get_popular_articles(self.ans)))

     def parse_index(self):
         if self.headlinesOnly:
@@ -731,7 +811,7 @@ class NYTimes(BasicNewsRecipe):


     def preprocess_html(self, soup):
-        #print("PREPROCESS TITLE="+self.tag_to_string(soup.title))
+        #print(strftime("%H:%M:%S")+" -- PREPROCESS TITLE="+self.tag_to_string(soup.title))
         skip_tag = soup.find(True, {'name':'skip'})
         if skip_tag is not None:
             #url = 'http://www.nytimes.com' + re.sub(r'\?.*', '', skip_tag.parent['href'])
@@ -907,6 +987,7 @@ class NYTimes(BasicNewsRecipe):
         for aside in soup.findAll('div','aside'):
             aside.extract()
         soup = self.strip_anchors(soup,True)
+        #print("RECURSIVE: "+self.tag_to_string(soup.title))

         if soup.find('div',attrs={'id':'blogcontent'}) is None:
             if first_fetch:
@@ -1071,7 +1152,7 @@ class NYTimes(BasicNewsRecipe):
                 divTag.replaceWith(tag)
         except:
             self.log("ERROR: Problem in Add class=authorId to <div> so we can format with CSS")
+        #print(strftime("%H:%M:%S")+" -- POSTPROCESS TITLE="+self.tag_to_string(soup.title))
         return soup

     def populate_article_metadata(self, article, soup, first):
recipes/outside_magazine.recipe (new file, 65 lines)
@@ -0,0 +1,65 @@
+from calibre.web.feeds.recipes import BasicNewsRecipe
+
+class NYTimes(BasicNewsRecipe):
+
+    title = 'Outside Magazine'
+    __author__ = 'Krittika Goyal'
+    description = 'Outside Magazine - Free 1 Month Old Issue'
+    timefmt = ' [%d %b, %Y]'
+    needs_subscription = False
+    language = 'en'
+
+    no_stylesheets = True
+    #auto_cleanup = True
+    #auto_cleanup_keep = '//div[@class="thumbnail"]'
+
+    keep_only_tags = dict(name='div', attrs={'class':'masonry-box width-four'})
+    remove_tags = [
+        dict(name='div', attrs={'id':['share-bar', 'outbrain_widget_0', 'outbrain_widget_1', 'livefyre']}),
+        #dict(name='div', attrs={'id':['qrformdiv', 'inSection', 'alpha-inner']}),
+        #dict(name='form', attrs={'onsubmit':''}),
+        dict(name='section', attrs={'id':['article-quote', 'article-navigation']}),
+    ]
+    #TO GET ARTICLE TOC
+    def out_get_index(self):
+        super_url = 'http://www.outsideonline.com/magazine/'
+        super_soup = self.index_to_soup(super_url)
+        div = super_soup.find(attrs={'class':'masonry-box width-four'})
+        issue = div.findAll(name='article')[1]
+        super_a = issue.find('a', href=True)
+        return super_a.get('href')
+
+    # To parse artice toc
+    def parse_index(self):
+        parse_soup = self.index_to_soup(self.out_get_index())
+
+        feeds = []
+        feed_title = 'Articles'
+
+        articles = []
+        self.log('Found section:', feed_title)
+        div = parse_soup.find(attrs={'class':'print clearfix'})
+        for art in div.findAll(name='p'):
+            art_info = art.find(name = 'a')
+            if art_info is None:
+                continue
+            art_title = self.tag_to_string(art_info)
+            url = art_info.get('href') + '?page=all'
+            self.log.info('\tFound article:', art_title, 'at', url)
+            article = {'title':art_title, 'url':url, 'date':''}
+            #au = art.find(attrs={'class':'articleAuthors'})
+            #if au is not None:
+                #article['author'] = self.tag_to_string(au)
+            #desc = art.find(attrs={'class':'hover_text'})
+            #if desc is not None:
+                #desc = self.tag_to_string(desc)
+                #if 'author' in article:
+                    #desc = ' by ' + article['author'] + ' ' +desc
+            #article['description'] = desc
+            articles.append(article)
+        if articles:
+            feeds.append((feed_title, articles))
+
+        return feeds
recipes/oxford_mail.recipe (new file, 22 lines)
@@ -0,0 +1,22 @@
+from calibre.web.feeds.news import BasicNewsRecipe
+
+class HindustanTimes(BasicNewsRecipe):
+    title = u'Oxford Mail'
+    language = 'en_GB'
+    __author__ = 'Krittika Goyal'
+    oldest_article = 1 #days
+    max_articles_per_feed = 25
+    #encoding = 'cp1252'
+    use_embedded_content = False
+
+    no_stylesheets = True
+    auto_cleanup = True
+
+
+    feeds = [
+        ('News',
+         'http://www.oxfordmail.co.uk/news/rss/'),
+        ('Sports',
+         'http://www.oxfordmail.co.uk/sport/rss/'),
+    ]
@@ -6,7 +6,6 @@ class PhilosophyNow(BasicNewsRecipe):

     title = 'Philosophy Now'
     __author__ = 'Rick Shang'
-
     description = '''Philosophy Now is a lively magazine for everyone
     interested in ideas. It isn't afraid to tackle all the major questions of
     life, the universe and everything. Published every two months, it tries to
@@ -27,7 +26,7 @@ class PhilosophyNow(BasicNewsRecipe):
     def get_browser(self):
         br = BasicNewsRecipe.get_browser()
         br.open('https://philosophynow.org/auth/login')
-        br.select_form(nr = 1)
+        br.select_form(name="loginForm")
         br['username'] = self.username
         br['password'] = self.password
         br.submit()
@@ -50,19 +49,20 @@ class PhilosophyNow(BasicNewsRecipe):
         #Go to the main body
         current_issue_url = 'http://philosophynow.org/issues/' + issuenum
         soup = self.index_to_soup(current_issue_url)
-        div = soup.find ('div', attrs={'class':'articlesColumn'})
+        div = soup.find ('div', attrs={'class':'contentsColumn'})

         feeds = OrderedDict()

-        for post in div.findAll('h3'):
+
+        for post in div.findAll('h1'):
             articles = []
             a=post.find('a',href=True)
             if a is not None:
                 url="http://philosophynow.org" + a['href']
                 title=self.tag_to_string(a).strip()
-                s=post.findPrevious('h4')
+                s=post.findPrevious('h3')
                 section_title = self.tag_to_string(s).strip()
-                d=post.findNext('p')
+                d=post.findNext('h2')
                 desc = self.tag_to_string(d).strip()
                 articles.append({'title':title, 'url':url, 'description':desc, 'date':''})

@@ -73,3 +73,5 @@ class PhilosophyNow(BasicNewsRecipe):
         ans = [(key, val) for key, val in feeds.iteritems()]
         return ans
+
+    def cleanup(self):
+        self.browser.open('http://philosophynow.org/auth/logout')
recipes/schattenblick.recipe (new file, 13 lines)
@@ -0,0 +1,13 @@
+from calibre.web.feeds.news import BasicNewsRecipe
+
+class AdvancedUserRecipe1345802300(BasicNewsRecipe):
+    title = u'Online-Zeitung Schattenblick'
+    language = 'de'
+    __author__ = 'ThB'
+    publisher = u'MA-Verlag'
+    category = u'Nachrichten'
+    oldest_article = 7
+    max_articles_per_feed = 100
+    cover_url = 'http://www.schattenblick.de/mobi/rss/cover.jpg'
+    feeds = [(u'Schattenblick Tagesausgabe', u'http://www.schattenblick.de/mobi/rss/rss.xml')]
@@ -48,10 +48,14 @@ class Smithsonian(BasicNewsRecipe):
                 link=post.find('a',href=True)
                 url=link['href']+'?c=y&story=fullstory'
                 if subsection is not None:
-                    subsection_title = self.tag_to_string(subsection)
+                    subsection_title = self.tag_to_string(subsection).strip()
                     prefix = (subsection_title+': ')
                     description=self.tag_to_string(post('p', limit=2)[1]).strip()
                 else:
+                    if post.find('img') is not None:
+                        subsection_title = self.tag_to_string(post.findPrevious('div', attrs={'class':'departments plainModule'}).find('p', attrs={'class':'article-cat'})).strip()
+                        prefix = (subsection_title+': ')
+
                     description=self.tag_to_string(post.find('p')).strip()
                 desc=re.sub('\sBy\s.*', '', description, re.DOTALL)
                 author=re.sub('.*By\s', '', description, re.DOTALL)
@@ -64,4 +68,3 @@ class Smithsonian(BasicNewsRecipe):
             feeds[section_title] += articles
         ans = [(key, val) for key, val in feeds.iteritems()]
         return ans
-
recipes/spectator_magazine.recipe (new file, 60 lines)
@@ -0,0 +1,60 @@
+from calibre.web.feeds.recipes import BasicNewsRecipe
+
+class NYTimes(BasicNewsRecipe):
+
+    title = 'Spectator Magazine'
+    __author__ = 'Krittika Goyal'
+    description = 'Magazine'
+    timefmt = ' [%d %b, %Y]'
+    needs_subscription = False
+    language = 'en'
+
+    no_stylesheets = True
+    #auto_cleanup = True
+    #auto_cleanup_keep = '//div[@class="thumbnail"]'
+
+    keep_only_tags = dict(name='div', attrs={'id':'content'})
+    remove_tags = [
+        dict(name='div', attrs={'id':['disqus_thread']}),
+        ##dict(name='div', attrs={'id':['qrformdiv', 'inSection', 'alpha-inner']}),
+        ##dict(name='form', attrs={'onsubmit':''}),
+        #dict(name='section', attrs={'id':['article-quote', 'article-navigation']}),
+    ]
+
+    #TO GET ARTICLE TOC
+    def spec_get_index(self):
+        return self.index_to_soup('http://www.spectator.co.uk/')
+
+    # To parse artice toc
+    def parse_index(self):
+        parse_soup = self.index_to_soup('http://www.spectator.co.uk/')
+
+        feeds = []
+        feed_title = 'Spectator Magazine Articles'
+
+        articles = []
+        self.log('Found section:', feed_title)
+        div = parse_soup.find(attrs={'class':'one-col-tax-widget magazine-list columns-1 post-8 taxonomy-category full-width widget section-widget icit-taxonomical-listings'})
+        for art in div.findAll(name='h2'):
+            art_info = art.find(name = 'a')
+            if art_info is None:
+                continue
+            art_title = self.tag_to_string(art_info)
+            url = art_info.get('href')
+            self.log.info('\tFound article:', art_title, 'at', url)
+            article = {'title':art_title, 'url':url, 'date':''}
+            #au = art.find(attrs={'class':'articleAuthors'})
+            #if au is not None:
+                #article['author'] = self.tag_to_string(au)
+            #desc = art.find(attrs={'class':'hover_text'})
+            #if desc is not None:
+                #desc = self.tag_to_string(desc)
+                #if 'author' in article:
+                    #desc = ' by ' + article['author'] + ' ' +desc
+            #article['description'] = desc
+            articles.append(article)
+        if articles:
+            feeds.append((feed_title, articles))
+
+        return feeds
@@ -1,13 +1,16 @@
-# vim:fileencoding=UTF-8:ts=4:sw=4:sta:et:sts=4:ai
 __license__   = 'GPL v3'
-__copyright__ = '2012, Andreas Zeiser <andreas.zeiser@web.de>'
+__copyright__ = '2012, 2013 Andreas Zeiser <andreas.zeiser@web.de>'
 '''
 szmobil.sueddeutsche.de/
 '''
+# History
+# 2013.01.09 Fixed bugs in article titles containing "strong" and
+#            other small changes
+# 2012.08.04 Initial release

 from calibre import strftime
 from calibre.web.feeds.recipes import BasicNewsRecipe
 import re

 class SZmobil(BasicNewsRecipe):
     title = u'Süddeutsche Zeitung mobil'
@@ -26,6 +29,8 @@ class SZmobil(BasicNewsRecipe):
     delay = 1
     cover_source = 'http://www.sueddeutsche.de/verlag'

+    # if you want to get rid of the date on the title page use
+    # timefmt = ''
     timefmt = ' [%a, %d %b, %Y]'

     root_url ='http://szmobil.sueddeutsche.de/'
@@ -50,7 +55,7 @@ class SZmobil(BasicNewsRecipe):

         return browser

     def parse_index(self):
         # find all sections
         src = self.index_to_soup('http://szmobil.sueddeutsche.de')
         feeds = []
@@ -76,10 +81,10 @@ class SZmobil(BasicNewsRecipe):
             # first check if link is a special article in section "Meinungsseite"
             if itt.find('strong')!= None:
                 article_name = itt.strong.string
-                article_shorttitle = itt.contents[1]
+                if len(itt.contents)>1:
+                    shorttitles[article_id] = itt.contents[1]

                 articles.append( (article_name, article_url, article_id) )
-                shorttitles[article_id] = article_shorttitle
                 continue


@@ -89,7 +94,7 @@ class SZmobil(BasicNewsRecipe):
             else:
                 article_name = itt.string

-            if (article_name[0:10] == " mehr"):
+            if (article_name.find(" mehr") == 0):
                 # just another link ("mehr") to an article
                 continue

@@ -102,7 +107,9 @@ class SZmobil(BasicNewsRecipe):
         for article_name, article_url, article_id in articles:
             url = self.root_url + article_url
             title = article_name
-            pubdate = strftime('%a, %d %b')
+            # if you want to get rid of date for each article use
+            # pubdate = strftime('')
+            pubdate = strftime('[%a, %d %b]')
             description = ''
             if shorttitles.has_key(article_id):
                 description = shorttitles[article_id]
@@ -115,3 +122,4 @@ class SZmobil(BasicNewsRecipe):

         return all_articles
+
@ -16,8 +16,9 @@ class TidBITS(BasicNewsRecipe):
|
|||||||
oldest_article = 2
|
oldest_article = 2
|
||||||
max_articles_per_feed = 100
|
max_articles_per_feed = 100
|
||||||
no_stylesheets = True
|
no_stylesheets = True
|
||||||
|
#auto_cleanup = True
|
||||||
encoding = 'utf-8'
|
encoding = 'utf-8'
|
||||||
use_embedded_content = True
|
use_embedded_content = False
|
||||||
language = 'en'
|
language = 'en'
|
||||||
remove_empty_feeds = True
|
remove_empty_feeds = True
|
||||||
masthead_url = 'http://db.tidbits.com/images/tblogo9.gif'
|
masthead_url = 'http://db.tidbits.com/images/tblogo9.gif'
|
||||||
@ -30,9 +31,11 @@ class TidBITS(BasicNewsRecipe):
|
|||||||
, 'language' : language
|
, 'language' : language
|
||||||
}
|
}
|
||||||
|
|
||||||
remove_attributes = ['width','height']
|
#remove_attributes = ['width','height']
|
||||||
remove_tags = [dict(name='small')]
|
#remove_tags = [dict(name='small')]
|
||||||
remove_tags_after = dict(name='small')
|
#remove_tags_after = dict(name='small')
|
||||||
|
keep_only_tags = [dict(name='div', attrs={'id':'center_ajax_sub'})]
|
||||||
|
remove_tags = [dict(name='div', attrs={'id':'social-media'})]
|
||||||
|
|
||||||
feeds = [
|
feeds = [
|
||||||
(u'Business Apps' , u'http://db.tidbits.com/feeds/business.rss' )
|
(u'Business Apps' , u'http://db.tidbits.com/feeds/business.rss' )
|
||||||
|
@ -26,28 +26,33 @@ class TodaysZaman_en(BasicNewsRecipe):
|
|||||||
# remove_attributes = ['width','height']
|
# remove_attributes = ['width','height']
|
||||||
|
|
||||||
feeds = [
|
feeds = [
|
||||||
( u'Home', u'http://www.todayszaman.com/rss?sectionId=0'),
|
( u'Home', u'http://www.todayszaman.com/0.rss'),
|
||||||
( u'News', u'http://www.todayszaman.com/rss?sectionId=100'),
|
( u'Sports', u'http://www.todayszaman.com/5.rss'),
|
||||||
( u'Business', u'http://www.todayszaman.com/rss?sectionId=105'),
|
( u'Columnists', u'http://www.todayszaman.com/6.rss'),
|
||||||
( u'Interviews', u'http://www.todayszaman.com/rss?sectionId=8'),
|
( u'Interviews', u'http://www.todayszaman.com/9.rss'),
|
||||||
( u'Columnists', u'http://www.todayszaman.com/rss?sectionId=6'),
|
( u'News', u'http://www.todayszaman.com/100.rss'),
|
||||||
( u'Op-Ed', u'http://www.todayszaman.com/rss?sectionId=109'),
|
( u'National', u'http://www.todayszaman.com/101.rss'),
|
||||||
( u'Arts & Culture', u'http://www.todayszaman.com/rss?sectionId=110'),
|
( u'Diplomacy', u'http://www.todayszaman.com/102.rss'),
|
||||||
( u'Expat Zone', u'http://www.todayszaman.com/rss?sectionId=132'),
|
( u'World', u'http://www.todayszaman.com/104.rss'),
|
||||||
( u'Sports', u'http://www.todayszaman.com/rss?sectionId=5'),
|
( u'Business', u'http://www.todayszaman.com/105.rss'),
|
||||||
( u'Features', u'http://www.todayszaman.com/rss?sectionId=116'),
|
( u'Op-Ed', u'http://www.todayszaman.com/109.rss'),
|
||||||
( u'Travel', u'http://www.todayszaman.com/rss?sectionId=117'),
|
( u'Arts & Culture', u'http://www.todayszaman.com/110.rss'),
|
||||||
( u'Leisure', u'http://www.todayszaman.com/rss?sectionId=118'),
|
( u'Features', u'http://www.todayszaman.com/116.rss'),
|
||||||
( u'Weird But True', u'http://www.todayszaman.com/rss?sectionId=134'),
|
( u'Travel', u'http://www.todayszaman.com/117.rss'),
|
||||||
( u'Life', u'http://www.todayszaman.com/rss?sectionId=133'),
|
( u'Food', u'http://www.todayszaman.com/124.rss'),
|
||||||
( u'Health', u'http://www.todayszaman.com/rss?sectionId=126'),
|
( u'Press Review', u'http://www.todayszaman.com/130.rss'),
|
||||||
( u'Press Review', u'http://www.todayszaman.com/rss?sectionId=130'),
|
( u'Expat Zone', u'http://www.todayszaman.com/132.rss'),
|
||||||
( u'Todays think tanks', u'http://www.todayszaman.com/rss?sectionId=159'),
|
( u'Life', u'http://www.todayszaman.com/133.rss'),
|
||||||
|
( u'Think Tanks', u'http://www.todayszaman.com/159.rss'),
|
||||||
]
|
( u'Almanac', u'http://www.todayszaman.com/161.rss'),
|
||||||
|
( u'Health', u'http://www.todayszaman.com/162.rss'),
|
||||||
|
( u'Fashion & Beauty', u'http://www.todayszaman.com/163.rss'),
|
||||||
|
( u'Science & Technology', u'http://www.todayszaman.com/349.rss'),
|
||||||
|
]
|
||||||
|
|
||||||
#def preprocess_html(self, soup):
|
#def preprocess_html(self, soup):
|
||||||
# return self.adeify_images(soup)
|
# return self.adeify_images(soup)
|
||||||
#def print_version(self, url): #there is a probem caused by table format
|
#def print_version(self, url): #there is a probem caused by table format
|
||||||
#return url.replace('http://www.todayszaman.com/newsDetail_getNewsById.action?load=detay&', 'http://www.todayszaman.com/newsDetail_openPrintPage.action?')
|
#return url.replace('http://www.todayszaman.com/newsDetail_getNewsById.action?load=detay&', 'http://www.todayszaman.com/newsDetail_openPrintPage.action?')
|
||||||
|
|
||||||
|
|
||||||
Binary file not shown.
Binary file not shown.
@@ -12,13 +12,13 @@ msgstr ""
 "Report-Msgid-Bugs-To: Debian iso-codes team <pkg-isocodes-"
 "devel@lists.alioth.debian.org>\n"
 "POT-Creation-Date: 2011-11-25 14:01+0000\n"
-"PO-Revision-Date: 2012-12-22 17:18+0000\n"
+"PO-Revision-Date: 2012-12-31 12:50+0000\n"
 "Last-Translator: Ferran Rius <frius64@hotmail.com>\n"
 "Language-Team: Catalan <linux@softcatala.org>\n"
 "MIME-Version: 1.0\n"
 "Content-Type: text/plain; charset=UTF-8\n"
 "Content-Transfer-Encoding: 8bit\n"
-"X-Launchpad-Export-Date: 2012-12-23 04:38+0000\n"
+"X-Launchpad-Export-Date: 2013-01-01 04:45+0000\n"
 "X-Generator: Launchpad (build 16378)\n"
 "Language: ca\n"

@@ -1744,7 +1744,7 @@ msgstr "Asu (Nigèria)"

 #. name for aun
 msgid "One; Molmo"
-msgstr "One; Molmo"
+msgstr "Oneià; Molmo"

 #. name for auo
 msgid "Auyokawa"
@@ -1964,7 +1964,7 @@ msgstr "Leyigha"

 #. name for ayk
 msgid "Akuku"
-msgstr "Akuku"
+msgstr "Okpe-Idesa-Akuku; Akuku"

 #. name for ayl
 msgid "Arabic; Libyan"
@@ -9984,7 +9984,7 @@ msgstr "Indri"

 #. name for ids
 msgid "Idesa"
-msgstr "Idesa"
+msgstr "Okpe-Idesa-Akuku; Idesa"

 #. name for idt
 msgid "Idaté"
@@ -19524,7 +19524,7 @@ msgstr ""

 #. name for obi
 msgid "Obispeño"
-msgstr ""
+msgstr "Obispeño"

 #. name for obk
 msgid "Bontok; Southern"
@@ -19532,7 +19532,7 @@ msgstr "Bontoc; meridional"

 #. name for obl
 msgid "Oblo"
-msgstr ""
+msgstr "Oblo"

 #. name for obm
 msgid "Moabite"
@@ -19552,11 +19552,11 @@ msgstr "Bretó; antic"

 #. name for obu
 msgid "Obulom"
-msgstr ""
+msgstr "Obulom"

 #. name for oca
 msgid "Ocaina"
-msgstr ""
+msgstr "Ocaina"

 #. name for och
 msgid "Chinese; Old"
@@ -19576,11 +19576,11 @@ msgstr "Matlazinca; Atzingo"

 #. name for oda
 msgid "Odut"
-msgstr ""
+msgstr "Odut"

 #. name for odk
 msgid "Od"
-msgstr ""
+msgstr "Od"

 #. name for odt
 msgid "Dutch; Old"
@@ -19588,11 +19588,11 @@ msgstr "Holandès; antic"

 #. name for odu
 msgid "Odual"
-msgstr ""
+msgstr "Odual"

 #. name for ofo
 msgid "Ofo"
-msgstr ""
+msgstr "Ofo"

 #. name for ofs
 msgid "Frisian; Old"
@@ -19604,11 +19604,11 @@ msgstr ""

 #. name for ogb
 msgid "Ogbia"
-msgstr ""
+msgstr "Ogbia"

 #. name for ogc
 msgid "Ogbah"
-msgstr ""
+msgstr "Ogbah"

 #. name for oge
 msgid "Georgian; Old"
@@ -19616,7 +19616,7 @@ msgstr ""

 #. name for ogg
 msgid "Ogbogolo"
-msgstr ""
+msgstr "Ogbogolo"

 #. name for ogo
 msgid "Khana"
@@ -19624,7 +19624,7 @@ msgstr ""

 #. name for ogu
 msgid "Ogbronuagum"
-msgstr ""
+msgstr "Ogbronuagum"

 #. name for oht
 msgid "Hittite; Old"
@@ -19636,27 +19636,27 @@ msgstr "Hongarès; antic"

 #. name for oia
 msgid "Oirata"
-msgstr ""
+msgstr "Oirata"

 #. name for oin
 msgid "One; Inebu"
-msgstr ""
+msgstr "Oneià; Inebu"

 #. name for ojb
 msgid "Ojibwa; Northwestern"
-msgstr ""
+msgstr "Ojibwa; Nordoccidental"

 #. name for ojc
 msgid "Ojibwa; Central"
-msgstr ""
+msgstr "Ojibwa; Central"

 #. name for ojg
 msgid "Ojibwa; Eastern"
-msgstr ""
+msgstr "Ojibwa; Oriental"

 #. name for oji
 msgid "Ojibwa"
-msgstr ""
+msgstr "Ojibwa; Occidental"

 #. name for ojp
 msgid "Japanese; Old"
@@ -19664,11 +19664,11 @@ msgstr "Japonès; antic"

 #. name for ojs
 msgid "Ojibwa; Severn"
-msgstr ""
+msgstr "Ojibwa; Severn"

 #. name for ojv
 msgid "Ontong Java"
-msgstr ""
+msgstr "Ontong Java"

 #. name for ojw
 msgid "Ojibwa; Western"
@@ -19676,19 +19676,19 @@ msgstr ""

 #. name for oka
 msgid "Okanagan"
-msgstr ""
+msgstr "Colville-Okanagà"

 #. name for okb
 msgid "Okobo"
-msgstr ""
+msgstr "Okobo"

 #. name for okd
 msgid "Okodia"
-msgstr ""
+msgstr "Okodia"

 #. name for oke
 msgid "Okpe (Southwestern Edo)"
-msgstr ""
+msgstr "Okpe"

 #. name for okh
 msgid "Koresh-e Rostam"
@@ -19696,15 +19696,15 @@ msgstr ""

 #. name for oki
 msgid "Okiek"
-msgstr ""
+msgstr "Okiek"

 #. name for okj
 msgid "Oko-Juwoi"
-msgstr ""
+msgstr "Oko-Juwoi"

 #. name for okk
 msgid "One; Kwamtim"
-msgstr ""
+msgstr "Oneià; Kwamtim"

 #. name for okl
 msgid "Kentish Sign Language; Old"
@@ -19716,7 +19716,7 @@ msgstr ""

 #. name for okn
 msgid "Oki-No-Erabu"
-msgstr ""
+msgstr "Oki-No-Erabu"

 #. name for oko
 msgid "Korean; Old (3rd-9th cent.)"
@@ -19728,19 +19728,19 @@ msgstr ""

 #. name for oks
 msgid "Oko-Eni-Osayen"
-msgstr ""
+msgstr "Oko-Eni-Osayen"

 #. name for oku
 msgid "Oku"
-msgstr ""
+msgstr "Oku"

 #. name for okv
 msgid "Orokaiva"
-msgstr ""
+msgstr "Orokaiwa"

 #. name for okx
 msgid "Okpe (Northwestern Edo)"
-msgstr ""
+msgstr "Okpe-Idesa-Akuku; Okpe"

 #. name for ola
 msgid "Walungge"
@@ -19752,11 +19752,11 @@ msgstr ""

 #. name for ole
 msgid "Olekha"
-msgstr ""
+msgstr "Olekha"

 #. name for olm
 msgid "Oloma"
-msgstr ""
+msgstr "Oloma"

 #. name for olo
 msgid "Livvi"
@@ -19768,7 +19768,7 @@ msgstr ""

 #. name for oma
 msgid "Omaha-Ponca"
-msgstr ""
+msgstr "Omaha-Ponca"

 #. name for omb
 msgid "Ambae; East"
@@ -19780,23 +19780,23 @@ msgstr ""

 #. name for ome
 msgid "Omejes"
-msgstr ""
+msgstr "Omejes"

 #. name for omg
 msgid "Omagua"
-msgstr ""
+msgstr "Omagua"

 #. name for omi
 msgid "Omi"
-msgstr ""
+msgstr "Omi"

 #. name for omk
 msgid "Omok"
-msgstr ""
+msgstr "Omok"

 #. name for oml
 msgid "Ombo"
-msgstr ""
+msgstr "Ombo"

 #. name for omn
 msgid "Minoan"
@@ -19816,11 +19816,11 @@ msgstr ""

 #. name for omt
 msgid "Omotik"
-msgstr ""
+msgstr "Omotik"

 #. name for omu
 msgid "Omurano"
-msgstr ""
+msgstr "Omurano"

 #. name for omw
 msgid "Tairora; South"
@@ -19832,7 +19832,7 @@ msgstr ""

 #. name for ona
 msgid "Ona"
-msgstr ""
+msgstr "Ona"

 #. name for onb
 msgid "Lingao"
@@ -19840,31 +19840,31 @@ msgstr ""

 #. name for one
 msgid "Oneida"
-msgstr ""
+msgstr "Oneida"

 #. name for ong
 msgid "Olo"
-msgstr ""
+msgstr "Olo"

 #. name for oni
 msgid "Onin"
-msgstr ""
+msgstr "Onin"

 #. name for onj
 msgid "Onjob"
-msgstr ""
+msgstr "Onjob"

 #. name for onk
 msgid "One; Kabore"
-msgstr ""
+msgstr "Oneià; Kabore"

 #. name for onn
 msgid "Onobasulu"
-msgstr ""
+msgstr "Onobasulu"

 #. name for ono
 msgid "Onondaga"
-msgstr ""
+msgstr "Onondaga"

 #. name for onp
 msgid "Sartang"
@@ -19872,15 +19872,15 @@ msgstr ""

 #. name for onr
 msgid "One; Northern"
-msgstr ""
+msgstr "Oneià; Septentrional"

 #. name for ons
 msgid "Ono"
-msgstr ""
+msgstr "Ono"

 #. name for ont
 msgid "Ontenu"
-msgstr ""
+msgstr "Ontenu"

 #. name for onu
 msgid "Unua"
@@ -19900,23 +19900,23 @@ msgstr ""

 #. name for oog
 msgid "Ong"
-msgstr ""
+msgstr "Ong"

 #. name for oon
 msgid "Önge"
-msgstr ""
+msgstr "Onge"

 #. name for oor
 msgid "Oorlams"
-msgstr ""
+msgstr "Oorlams"

 #. name for oos
 msgid "Ossetic; Old"
-msgstr ""
+msgstr "Osset"

 #. name for opa
 msgid "Okpamheri"
-msgstr ""
+msgstr "Okpamheri"

 #. name for opk
 msgid "Kopkaka"
@@ -19924,39 +19924,39 @@ msgstr ""

 #. name for opm
 msgid "Oksapmin"
-msgstr ""
+msgstr "Oksapmin"

 #. name for opo
 msgid "Opao"
-msgstr ""
+msgstr "Opao"

 #. name for opt
 msgid "Opata"
-msgstr ""
+msgstr "Opata"

 #. name for opy
 msgid "Ofayé"
-msgstr ""
+msgstr "Opaie"

 #. name for ora
 msgid "Oroha"
-msgstr ""
+msgstr "Oroha"

 #. name for orc
 msgid "Orma"
-msgstr ""
+msgstr "Orma"

 #. name for ore
 msgid "Orejón"
-msgstr ""
+msgstr "Orejon"

 #. name for org
 msgid "Oring"
-msgstr ""
+msgstr "Oring"

 #. name for orh
 msgid "Oroqen"
-msgstr ""
+msgstr "Orotxen"

 #. name for ori
 msgid "Oriya"
@@ -19968,19 +19968,19 @@ msgstr "Oromo"

 #. name for orn
 msgid "Orang Kanaq"
-msgstr ""
+msgstr "Orang; Kanaq"

 #. name for oro
 msgid "Orokolo"
-msgstr ""
+msgstr "Orocolo"

 #. name for orr
 msgid "Oruma"
-msgstr ""
+msgstr "Oruma"

 #. name for ors
 msgid "Orang Seletar"
-msgstr ""
+msgstr "Orang; Seletar"

 #. name for ort
 msgid "Oriya; Adivasi"
@@ -19988,7 +19988,7 @@ msgstr "Oriya; Adivasi"

 #. name for oru
 msgid "Ormuri"
-msgstr ""
+msgstr "Ormuri"

 #. name for orv
 msgid "Russian; Old"
@@ -19996,31 +19996,31 @@ msgstr "Rus; antic"

 #. name for orw
 msgid "Oro Win"
-msgstr ""
+msgstr "Oro Win"

 #. name for orx
 msgid "Oro"
-msgstr ""
+msgstr "Oro"

 #. name for orz
 msgid "Ormu"
-msgstr ""
+msgstr "Ormu"

 #. name for osa
 msgid "Osage"
-msgstr ""
+msgstr "Osage"

 #. name for osc
 msgid "Oscan"
-msgstr ""
+msgstr "Osc"

 #. name for osi
 msgid "Osing"
-msgstr ""
+msgstr "Osing"

 #. name for oso
 msgid "Ososo"
-msgstr ""
+msgstr "Ososo"

 #. name for osp
 msgid "Spanish; Old"
@@ -20028,15 +20028,15 @@ msgstr "Espanyol; antic"

 #. name for oss
 msgid "Ossetian"
-msgstr ""
+msgstr "Osset"

 #. name for ost
 msgid "Osatu"
-msgstr ""
+msgstr "Osatu"

 #. name for osu
 msgid "One; Southern"
-msgstr ""
+msgstr "One; Meridional"

 #. name for osx
 msgid "Saxon; Old"
@@ -20052,15 +20052,15 @@ msgstr ""

 #. name for otd
 msgid "Ot Danum"
-msgstr ""
+msgstr "Dohoi"

 #. name for ote
 msgid "Otomi; Mezquital"
-msgstr ""
+msgstr "Otomí; Mezquital"

 #. name for oti
 msgid "Oti"
-msgstr ""
+msgstr "Oti"

 #. name for otk
 msgid "Turkish; Old"
@@ -20068,43 +20068,43 @@ msgstr "Turc; antic"

 #. name for otl
 msgid "Otomi; Tilapa"
-msgstr ""
+msgstr "Otomí; Tilapa"

 #. name for otm
 msgid "Otomi; Eastern Highland"
-msgstr ""
+msgstr "Otomí; Oriental"

 #. name for otn
 msgid "Otomi; Tenango"
-msgstr ""
+msgstr "Otomí; Tenango"

 #. name for otq
 msgid "Otomi; Querétaro"
-msgstr ""
+msgstr "Otomí; Queretaro"

 #. name for otr
 msgid "Otoro"
-msgstr ""
+msgstr "Otoro"

 #. name for ots
 msgid "Otomi; Estado de México"
-msgstr ""
+msgstr "Otomí; Estat de Mèxic"

 #. name for ott
 msgid "Otomi; Temoaya"
-msgstr ""
+msgstr "Otomí; Temoaya"

 #. name for otu
 msgid "Otuke"
-msgstr ""
+msgstr "Otuke"

 #. name for otw
 msgid "Ottawa"
-msgstr ""
+msgstr "Ottawa"

 #. name for otx
 msgid "Otomi; Texcatepec"
-msgstr ""
+msgstr "Otomí; Texcatepec"

 #. name for oty
 msgid "Tamil; Old"
@@ -20112,7 +20112,7 @@ msgstr ""

 #. name for otz
 msgid "Otomi; Ixtenco"
-msgstr ""
+msgstr "Otomí; Ixtenc"

 #. name for oua
 msgid "Tagargrent"
@@ -20124,7 +20124,7 @@ msgstr ""

 #. name for oue
 msgid "Oune"
-msgstr ""
+msgstr "Oune"

 #. name for oui
 msgid "Uighur; Old"
@@ -20132,15 +20132,15 @@ msgstr ""

 #. name for oum
 msgid "Ouma"
-msgstr ""
+msgstr "Ouma"

 #. name for oun
 msgid "!O!ung"
-msgstr ""
+msgstr "Oung"

 #. name for owi
 msgid "Owiniga"
-msgstr ""
+msgstr "Owiniga"

 #. name for owl
 msgid "Welsh; Old"
@@ -20148,11 +20148,11 @@ msgstr "Gal·lès; antic"

 #. name for oyb
 msgid "Oy"
-msgstr ""
+msgstr "Oy"

 #. name for oyd
 msgid "Oyda"
-msgstr ""
+msgstr "Oyda"

 #. name for oym
 msgid "Wayampi"
@@ -20160,7 +20160,7 @@ msgstr ""

 #. name for oyy
 msgid "Oya'oya"
-msgstr ""
+msgstr "Oya'oya"

 #. name for ozm
 msgid "Koonzime"
@@ -20168,27 +20168,27 @@ msgstr ""

 #. name for pab
 msgid "Parecís"
-msgstr ""
+msgstr "Pareci"

 #. name for pac
 msgid "Pacoh"
-msgstr ""
+msgstr "Pacoh"

 #. name for pad
 msgid "Paumarí"
-msgstr ""
+msgstr "Paumarí"

 #. name for pae
 msgid "Pagibete"
-msgstr ""
+msgstr "Pagibete"

 #. name for paf
 msgid "Paranawát"
-msgstr ""
+msgstr "Paranawat"

 #. name for pag
 msgid "Pangasinan"
-msgstr ""
+msgstr "Pangasi"

 #. name for pah
 msgid "Tenharim"
@@ -20196,19 +20196,19 @@ msgstr ""

 #. name for pai
 msgid "Pe"
-msgstr ""
+msgstr "Pe"

 #. name for pak
 msgid "Parakanã"
-msgstr ""
+msgstr "Akwawa; Parakanà"

 #. name for pal
 msgid "Pahlavi"
-msgstr ""
+msgstr "Pahlavi"

 #. name for pam
 msgid "Pampanga"
-msgstr ""
+msgstr "Pampangà"

 #. name for pan
 msgid "Panjabi"
@@ -20220,63 +20220,63 @@ msgstr ""

 #. name for pap
 msgid "Papiamento"
-msgstr ""
+msgstr "Papiament"

 #. name for paq
 msgid "Parya"
-msgstr ""
+msgstr "Parya"

 #. name for par
 msgid "Panamint"
-msgstr ""
+msgstr "Panamint"

 #. name for pas
 msgid "Papasena"
-msgstr ""
+msgstr "Papasena"

 #. name for pat
 msgid "Papitalai"
-msgstr ""
+msgstr "Papitalai"

 #. name for pau
 msgid "Palauan"
-msgstr ""
+msgstr "Palavà"

 #. name for pav
 msgid "Pakaásnovos"
-msgstr ""
+msgstr "Pakaa Nova"

 #. name for paw
 msgid "Pawnee"
-msgstr ""
+msgstr "Pawnee"

 #. name for pax
 msgid "Pankararé"
-msgstr ""
+msgstr "Pankararé"

 #. name for pay
 msgid "Pech"
-msgstr ""
+msgstr "Pech"

 #. name for paz
 msgid "Pankararú"
-msgstr ""
+msgstr "Pankarurú"

 #. name for pbb
 msgid "Páez"
-msgstr ""
+msgstr "Páez"

 #. name for pbc
 msgid "Patamona"
-msgstr ""
+msgstr "Patamona"

 #. name for pbe
 msgid "Popoloca; Mezontla"
-msgstr ""
+msgstr "Popoloca; Mezontla"

 #. name for pbf
 msgid "Popoloca; Coyotepec"
-msgstr ""
+msgstr "Popoloca; Coyotepec"

 #. name for pbg
 msgid "Paraujano"
@@ -20288,7 +20288,7 @@ msgstr ""

 #. name for pbi
 msgid "Parkwa"
-msgstr ""
+msgstr "Parkwa"

 #. name for pbl
 msgid "Mak (Nigeria)"
@@ -20300,7 +20300,7 @@ msgstr ""

 #. name for pbo
 msgid "Papel"
-msgstr ""
+msgstr "Papel"

 #. name for pbp
 msgid "Badyara"
@@ -20336,7 +20336,7 @@ msgstr ""

 #. name for pca
 msgid "Popoloca; Santa Inés Ahuatempan"
-msgstr ""
+msgstr "Popoloca; Ahuatempan"

 #. name for pcb
 msgid "Pear"
@@ -20832,7 +20832,7 @@ msgstr "Senufo; Palaka"

 #. name for pls
 msgid "Popoloca; San Marcos Tlalcoyalco"
-msgstr ""
+msgstr "Popoloca; Tlalcoyalc"

 #. name for plt
 msgid "Malagasy; Plateau"
@@ -21040,7 +21040,7 @@ msgstr ""

 #. name for poe
 msgid "Popoloca; San Juan Atzingo"
-msgstr ""
+msgstr "Popoloca; Atzingo"

 #. name for pof
 msgid "Poke"
@@ -21104,7 +21104,7 @@ msgstr ""

 #. name for pow
 msgid "Popoloca; San Felipe Otlaltepec"
-msgstr ""
+msgstr "Popoloca; Otlaltepec"

 #. name for pox
 msgid "Polabian"
@@ -21160,7 +21160,7 @@ msgstr ""

 #. name for pps
 msgid "Popoloca; San Luís Temalacayuca"
-msgstr ""
+msgstr "Popoloca; Temalacayuca"

 #. name for ppt
 msgid "Pare"
|
@@ -9,13 +9,13 @@ msgstr ""
 "Project-Id-Version: calibre\n"
 "Report-Msgid-Bugs-To: FULL NAME <EMAIL@ADDRESS>\n"
 "POT-Creation-Date: 2011-11-25 14:01+0000\n"
-"PO-Revision-Date: 2012-12-24 08:05+0000\n"
-"Last-Translator: Adolfo Jayme Barrientos <fitoschido@gmail.com>\n"
+"PO-Revision-Date: 2012-12-28 09:13+0000\n"
+"Last-Translator: Jellby <Unknown>\n"
 "Language-Team: Español; Castellano <>\n"
 "MIME-Version: 1.0\n"
 "Content-Type: text/plain; charset=UTF-8\n"
 "Content-Transfer-Encoding: 8bit\n"
-"X-Launchpad-Export-Date: 2012-12-25 04:46+0000\n"
+"X-Launchpad-Export-Date: 2012-12-29 05:00+0000\n"
 "X-Generator: Launchpad (build 16378)\n"
 
 #. name for aaa
@@ -9584,7 +9584,7 @@ msgstr "Holikachuk"
 
 #. name for hoj
 msgid "Hadothi"
-msgstr "Hadothi"
+msgstr "Hadoti"
 
 #. name for hol
 msgid "Holu"
@@ -11796,7 +11796,7 @@ msgstr ""
 
 #. name for khq
 msgid "Songhay; Koyra Chiini"
-msgstr ""
+msgstr "Songhay koyra chiini"
 
 #. name for khr
 msgid "Kharia"
@@ -227,9 +227,22 @@ class GetTranslations(Translations):  # {{{
                 ans.append(line.split()[-1])
         return ans
 
+    def resolve_conflicts(self):
+        conflict = False
+        for line in subprocess.check_output(['bzr', 'status']).splitlines():
+            if line == 'conflicts:':
+                conflict = True
+                break
+        if not conflict:
+            raise Exception('bzr merge failed and no conflicts found')
+        subprocess.check_call(['bzr', 'resolve', '--take-other'])
+
     def run(self, opts):
         if not self.modified_translations:
-            subprocess.check_call(['bzr', 'merge', self.BRANCH])
+            try:
+                subprocess.check_call(['bzr', 'merge', self.BRANCH])
+            except subprocess.CalledProcessError:
+                self.resolve_conflicts()
         self.check_for_errors()
 
         if self.modified_translations:
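Reviewer note: the merge fallback in the hunk above hinges on spotting a `conflicts:` section in `bzr status` output before running `bzr resolve`. A minimal standalone sketch of that check, operating on a captured status string rather than shelling out (so it runs without bzr installed; the function name is hypothetical):

```python
def has_conflicts(status_output):
    """Return True if bzr-status-style output contains a conflicts section."""
    return any(line == 'conflicts:' for line in status_output.splitlines())
```

The same predicate is what gates the `bzr resolve --take-other` call in the diff.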
@@ -4,7 +4,7 @@ __license__ = 'GPL v3'
 __copyright__ = '2008, Kovid Goyal kovid@kovidgoyal.net'
 __docformat__ = 'restructuredtext en'
 __appname__ = u'calibre'
-numeric_version = (0, 9, 12)
+numeric_version = (0, 9, 14)
 __version__ = u'.'.join(map(unicode, numeric_version))
 __author__ = u"Kovid Goyal <kovid@kovidgoyal.net>"
 
@@ -770,13 +770,25 @@ class PocketBook900Output(OutputProfile):
     dpi = 150.0
     comic_screen_size = screen_size
 
+class PocketBookPro912Output(OutputProfile):
+
+    author = 'Daniele Pizzolli'
+    name = 'PocketBook Pro 912'
+    short_name = 'pocketbook_pro_912'
+    description = _('This profile is intended for the PocketBook Pro 912 series of devices.')
+
+    # According to http://download.pocketbook-int.com/user-guides/E_Ink/912/User_Guide_PocketBook_912(EN).pdf
+    screen_size = (825, 1200)
+    dpi = 155.0
+    comic_screen_size = screen_size
+
 output_profiles = [OutputProfile, SonyReaderOutput, SonyReader300Output,
         SonyReader900Output, MSReaderOutput, MobipocketOutput, HanlinV3Output,
         HanlinV5Output, CybookG3Output, CybookOpusOutput, KindleOutput,
         iPadOutput, iPad3Output, KoboReaderOutput, TabletOutput, SamsungGalaxy,
         SonyReaderLandscapeOutput, KindleDXOutput, IlliadOutput,
         IRexDR1000Output, IRexDR800Output, JetBook5Output, NookOutput,
-        BambookOutput, NookColorOutput, PocketBook900Output, GenericEink,
-        GenericEinkLarge, KindleFireOutput, KindlePaperWhiteOutput]
+        BambookOutput, NookColorOutput, PocketBook900Output, PocketBookPro912Output,
+        GenericEink, GenericEinkLarge, KindleFireOutput, KindlePaperWhiteOutput]
 
 output_profiles.sort(cmp=lambda x,y:cmp(x.name.lower(), y.name.lower()))
@@ -191,7 +191,7 @@ class ANDROID(USBMS):
         0x10a9 : { 0x6050 : [0x227] },
 
         # Prestigio
-        0x2207 : { 0 : [0x222] },
+        0x2207 : { 0 : [0x222], 0x10 : [0x222] },
 
         }
     EBOOK_DIR_MAIN = ['eBooks/import', 'wordplayer/calibretransfer', 'Books',
@@ -214,7 +214,7 @@ class ANDROID(USBMS):
             'POCKET', 'ONDA_MID', 'ZENITHIN', 'INGENIC', 'PMID701C', 'PD',
             'PMP5097C', 'MASS', 'NOVO7', 'ZEKI', 'COBY', 'SXZ', 'USB_2.0',
             'COBY_MID', 'VS', 'AINOL', 'TOPWISE', 'PAD703', 'NEXT8D12',
-            'MEDIATEK']
+            'MEDIATEK', 'KEENHI']
     WINDOWS_MAIN_MEM = ['ANDROID_PHONE', 'A855', 'A853', 'INC.NEXUS_ONE',
             '__UMS_COMPOSITE', '_MB200', 'MASS_STORAGE', '_-_CARD', 'SGH-I897',
             'GT-I9000', 'FILE-STOR_GADGET', 'SGH-T959_CARD', 'SGH-T959', 'SAMSUNG_ANDROID',
@@ -234,7 +234,8 @@ class ANDROID(USBMS):
             'THINKPAD_TABLET', 'SGH-T989', 'YP-G70', 'STORAGE_DEVICE',
             'ADVANCED', 'SGH-I727', 'USB_FLASH_DRIVER', 'ANDROID',
             'S5830I_CARD', 'MID7042', 'LINK-CREATE', '7035', 'VIEWPAD_7E',
-            'NOVO7', 'MB526', '_USB#WYK7MSF8KE', 'TABLET_PC', 'F', 'MT65XX_MS']
+            'NOVO7', 'MB526', '_USB#WYK7MSF8KE', 'TABLET_PC', 'F', 'MT65XX_MS',
+            'ICS']
     WINDOWS_CARD_A_MEM = ['ANDROID_PHONE', 'GT-I9000_CARD', 'SGH-I897',
             'FILE-STOR_GADGET', 'SGH-T959_CARD', 'SGH-T959', 'SAMSUNG_ANDROID', 'GT-P1000_CARD',
             'A70S', 'A101IT', '7', 'INCREDIBLE', 'A7EB', 'SGH-T849_CARD',
@@ -234,7 +234,7 @@ class POCKETBOOK301(USBMS):
 class POCKETBOOK602(USBMS):
 
     name = 'PocketBook Pro 602/902 Device Interface'
-    description = _('Communicate with the PocketBook 602/603/902/903 reader.')
+    description = _('Communicate with the PocketBook 602/603/902/903/Pro 912 reader.')
     author = 'Kovid Goyal'
     supported_platforms = ['windows', 'osx', 'linux']
     FORMATS = ['epub', 'fb2', 'prc', 'mobi', 'pdf', 'djvu', 'rtf', 'chm',
@@ -249,7 +249,7 @@ class POCKETBOOK602(USBMS):
 
     VENDOR_NAME = ''
     WINDOWS_MAIN_MEM = WINDOWS_CARD_A_MEM = ['PB602', 'PB603', 'PB902',
-            'PB903', 'PB']
+            'PB903', 'Pocket912', 'PB']
 
 class POCKETBOOK622(POCKETBOOK602):
 
@@ -37,7 +37,7 @@ class KOBO(USBMS):
 
     dbversion = 0
     fwversion = 0
-    supported_dbversion = 65
+    supported_dbversion = 75
     has_kepubs = False
 
     supported_platforms = ['windows', 'osx', 'linux']
@@ -20,6 +20,9 @@ const calibre_device_entry_t calibre_mtp_device_table[] = {
     , { "Google", 0x18d1, "Nexus 10", 0x4ee2, DEVICE_FLAGS_ANDROID_BUGS}
     , { "Google", 0x18d1, "Nexus 10", 0x4ee1, DEVICE_FLAGS_ANDROID_BUGS}
 
+    // Kobo Arc
+    , { "Kobo", 0x2237, "Arc", 0xd108, DEVICE_FLAGS_ANDROID_BUGS}
+
     , { NULL, 0xffff, NULL, 0xffff, DEVICE_FLAG_NONE }
 };
 
@@ -13,7 +13,7 @@ from collections import namedtuple
 from functools import partial
 
 from calibre import prints, as_unicode
-from calibre.constants import plugins
+from calibre.constants import plugins, islinux
 from calibre.ptempfile import SpooledTemporaryFile
 from calibre.devices.errors import OpenFailed, DeviceError, BlacklistedDevice
 from calibre.devices.mtp.base import MTPDeviceBase, synchronous, debug
@@ -44,6 +44,17 @@ class MTP_DEVICE(MTPDeviceBase):
         self.blacklisted_devices = set()
         self.ejected_devices = set()
         self.currently_connected_dev = None
+        self._is_device_mtp = None
+        if islinux:
+            from calibre.devices.mtp.unix.sysfs import MTPDetect
+            self._is_device_mtp = MTPDetect()
+
+    def is_device_mtp(self, d, debug=None):
+        ''' Returns True iff the _is_device_mtp check returns True and libmtp
+        is able to probe the device successfully. '''
+        if self._is_device_mtp is None: return False
+        return (self._is_device_mtp(d, debug=debug) and
+                self.libmtp.is_mtp_device(d.busnum, d.devnum))
 
     def set_debug_level(self, lvl):
         self.libmtp.set_debug_level(lvl)
@@ -77,7 +88,9 @@ class MTP_DEVICE(MTPDeviceBase):
         for d in devs:
             ans = cache.get(d, None)
             if ans is None:
-                ans = (d.vendor_id, d.product_id) in self.known_devices
+                ans = (
+                    (d.vendor_id, d.product_id) in self.known_devices or
+                    self.is_device_mtp(d))
                 cache[d] = ans
             if ans:
                 return d
@@ -95,12 +108,13 @@ class MTP_DEVICE(MTPDeviceBase):
             err = 'startup() not called on this device driver'
             p(err)
             return False
-        devs = [d for d in devices_on_system if (d.vendor_id, d.product_id)
-                in self.known_devices and d.vendor_id != APPLE]
+        devs = [d for d in devices_on_system if
+                ( (d.vendor_id, d.product_id) in self.known_devices or
+                  self.is_device_mtp(d, debug=p)) and d.vendor_id != APPLE]
         if not devs:
-            p('No known MTP devices connected to system')
+            p('No MTP devices connected to system')
             return False
-        p('Known MTP devices connected:')
+        p('MTP devices connected:')
         for d in devs: p(d)
 
         for d in devs:
@@ -662,13 +662,6 @@ is_mtp_device(PyObject *self, PyObject *args) {
 
     if (!PyArg_ParseTuple(args, "ii", &busnum, &devnum)) return NULL;
 
-    /*
-     * LIBMTP_Check_Specific_Device does not seem to work at least on my linux
-     * system. Need to investigate why later. Most devices are in the device
-     * table so this is not terribly important.
-     */
-    /* LIBMTP_Set_Debug(LIBMTP_DEBUG_ALL); */
-    /* printf("Calling check: %d %d\n", busnum, devnum); */
     Py_BEGIN_ALLOW_THREADS;
     ans = LIBMTP_Check_Specific_Device(busnum, devnum);
     Py_END_ALLOW_THREADS;
@@ -734,6 +727,7 @@ initlibmtp(void) {
     // who designs a library without anyway to control/redirect the debugging
    // output, and hardcoded paths that cannot be changed?
     int bak, new;
+    fprintf(stdout, "\n"); // This is needed, without it, for some odd reason the code below causes stdout to buffer all output after it is restored, rather than using line buffering, and setlinebuf does not work.
     fflush(stdout);
     bak = dup(STDOUT_FILENO);
     new = open("/dev/null", O_WRONLY);
53  src/calibre/devices/mtp/unix/sysfs.py  Normal file
@@ -0,0 +1,53 @@
+#!/usr/bin/env python
+# vim:fileencoding=UTF-8:ts=4:sw=4:sta:et:sts=4:fdm=marker:ai
+from __future__ import (unicode_literals, division, absolute_import,
+                        print_function)
+
+__license__ = 'GPL v3'
+__copyright__ = '2013, Kovid Goyal <kovid at kovidgoyal.net>'
+__docformat__ = 'restructuredtext en'
+
+import os, glob
+
+class MTPDetect(object):
+
+    SYSFS_PATH = os.environ.get('SYSFS_PATH', '/sys')
+
+    def __init__(self):
+        self.base = os.path.join(self.SYSFS_PATH, 'subsystem', 'usb', 'devices')
+        if not os.path.exists(self.base):
+            self.base = os.path.join(self.SYSFS_PATH, 'bus', 'usb', 'devices')
+        self.ok = os.path.exists(self.base)
+
+    def __call__(self, dev, debug=None):
+        '''
+        Check if the device has an interface named "MTP" using sysfs, which
+        avoids probing the device.
+        '''
+        if not self.ok: return False
+
+        def read(x):
+            try:
+                with open(x, 'rb') as f:
+                    return f.read()
+            except EnvironmentError:
+                pass
+
+        ipath = os.path.join(self.base, '{0}-*/{0}-*/interface'.format(dev.busnum))
+        for x in glob.glob(ipath):
+            raw = read(x)
+            if not raw or raw.strip() != b'MTP': continue
+            raw = read(os.path.join(os.path.dirname(os.path.dirname(x)),
+                'devnum'))
+            try:
+                if raw and int(raw) == dev.devnum:
+                    if debug is not None:
+                        debug('Unknown device {0} claims to be an MTP device'
+                              .format(dev))
+                    return True
+            except (ValueError, TypeError):
+                continue
+
+        return False
+
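Reviewer note: the new `MTPDetect` class reads sysfs rather than probing the device over USB. The core of it can be exercised against any directory laid out like `/sys/bus/usb/devices`; the sketch below mirrors that glob-and-compare logic as a free function (name and directory layout are illustrative, not calibre API):

```python
import glob
import os

def find_mtp_interface(base, busnum, devnum):
    """Mimic MTPDetect.__call__: look for an 'interface' file containing
    'MTP' under <base>/<bus>-*/<bus>-*/ and confirm that the sibling
    'devnum' file two levels up matches the given device number."""
    ipath = os.path.join(base, '{0}-*/{0}-*/interface'.format(busnum))
    for x in glob.glob(ipath):
        try:
            with open(x, 'rb') as f:
                raw = f.read()
        except EnvironmentError:
            continue
        if raw.strip() != b'MTP':
            continue
        dpath = os.path.join(os.path.dirname(os.path.dirname(x)), 'devnum')
        try:
            with open(dpath, 'rb') as f:
                if int(f.read().decode('ascii').strip()) == devnum:
                    return True
        except (EnvironmentError, ValueError):
            continue
    return False
```

Pointing `base` at a scratch directory (or setting `SYSFS_PATH` for the real class) makes the detection logic testable without hardware.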
@@ -300,19 +300,21 @@ class SMART_DEVICE_APP(DeviceConfig, DevicePlugin):
               'particular IP address. The driver will listen only on the '
               'entered address, and this address will be the one advertized '
               'over mDNS (bonjour).') + '</p>',
+        _('Replace books with the same calibre identifier') + ':::<p>' +
+        _('Use this option to overwrite a book on the device if that book '
+          'has the same calibre identifier as the book being sent. The file name of the '
+          'book will not change even if the save template produces a '
+          'different result. Using this option in most cases prevents '
+          'having multiple copies of a book on the device.') + '</p>',
         ]
     EXTRA_CUSTOMIZATION_DEFAULT = [
-                False,
-                '',
-                '',
-                '',
+                False, '',
+                '', '',
                 False, '9090',
-                False,
-                '',
-                '',
-                '',
-                True,
-                ''
+                False, '',
+                '', '',
+                True, '',
+                True
         ]
     OPT_AUTOSTART = 0
     OPT_PASSWORD = 2
@@ -322,6 +324,7 @@ class SMART_DEVICE_APP(DeviceConfig, DevicePlugin):
     OPT_COLLECTIONS = 8
     OPT_AUTODISCONNECT = 10
     OPT_FORCE_IP_ADDRESS = 11
+    OPT_OVERWRITE_BOOKS_UUID = 12
 
 
     def __init__(self, path):
@@ -385,6 +388,20 @@ class SMART_DEVICE_APP(DeviceConfig, DevicePlugin):
         fname = sanitize(fname)
         ext = os.path.splitext(fname)[1]
 
+        try:
+            # If we have already seen this book's UUID, use the existing path
+            if self.settings().extra_customization[self.OPT_OVERWRITE_BOOKS_UUID]:
+                existing_book = self._uuid_already_on_device(mdata.uuid, ext)
+                if existing_book and existing_book.lpath:
+                    return existing_book.lpath
+
+            # If the device asked for it, try to use the UUID as the file name.
+            # Fall back to the ch if the UUID doesn't exist.
+            if self.client_wants_uuid_file_names and mdata.uuid:
+                return (mdata.uuid + ext)
+        except:
+            pass
+
         maxlen = (self.MAX_PATH_LEN - (self.PATH_FUDGE_FACTOR +
             self.exts_path_lengths.get(ext, self.PATH_FUDGE_FACTOR)))
 
@@ -671,12 +688,24 @@ class SMART_DEVICE_APP(DeviceConfig, DevicePlugin):
                 return not v_thumb or v_thumb[1] == b_thumb[1]
         return False
 
+    def _uuid_already_on_device(self, uuid, ext):
+        try:
+            return self.known_uuids.get(uuid + ext, None)
+        except:
+            return None
+
     def _set_known_metadata(self, book, remove=False):
         lpath = book.lpath
+        ext = os.path.splitext(lpath)[1]
+        uuid = book.get('uuid', None)
         if remove:
             self.known_metadata.pop(lpath, None)
+            if uuid and ext:
+                self.known_uuids.pop(uuid+ext, None)
         else:
-            self.known_metadata[lpath] = book.deepcopy()
+            new_book = self.known_metadata[lpath] = book.deepcopy()
+            if uuid and ext:
+                self.known_uuids[uuid+ext] = new_book
 
     def _close_device_socket(self):
         if self.device_socket is not None:
@@ -845,6 +874,10 @@ class SMART_DEVICE_APP(DeviceConfig, DevicePlugin):
             self._close_device_socket()
             return False
 
+        self.client_wants_uuid_file_names = result.get('useUuidFileNames', False)
+        self._debug('Device wants UUID file names', self.client_wants_uuid_file_names)
+
+
         config = self._configProxy()
         config['format_map'] = exts
         self._debug('selected formats', config['format_map'])
@@ -1085,6 +1118,7 @@ class SMART_DEVICE_APP(DeviceConfig, DevicePlugin):
         for i, infile in enumerate(files):
             mdata, fname = metadata.next(), names.next()
             lpath = self._create_upload_path(mdata, fname, create_dirs=False)
+            self._debug('lpath', lpath)
             if not hasattr(infile, 'read'):
                 infile = USBMS.normalize_path(infile)
             book = SDBook(self.PREFIX, lpath, other=mdata)
@@ -1246,6 +1280,7 @@ class SMART_DEVICE_APP(DeviceConfig, DevicePlugin):
         self.device_socket = None
         self.json_codec = JsonCodec()
         self.known_metadata = {}
+        self.known_uuids = {}
         self.debug_time = time.time()
         self.debug_start_time = time.time()
         self.max_book_packet_len = 0
@@ -1253,6 +1288,7 @@ class SMART_DEVICE_APP(DeviceConfig, DevicePlugin):
         self.connection_attempts = {}
         self.client_can_stream_books = False
         self.client_can_stream_metadata = False
+        self.client_wants_uuid_file_names = False
 
         self._debug("All IP addresses", get_all_ips())
 
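Reviewer note: the UUID bookkeeping added to the smart device driver is essentially a dict keyed on `uuid + ext`, kept in step with `known_metadata`, so a re-sent book can reuse its old path. A reduced standalone sketch of that idea (class and method names hypothetical, not calibre API):

```python
class UuidCache(object):
    """Track books by uuid+extension so a re-sent book reuses its old path."""

    def __init__(self):
        self.known_uuids = {}

    def record(self, uuid, ext, lpath):
        # Mirrors the non-remove branch of _set_known_metadata
        self.known_uuids[uuid + ext] = {'lpath': lpath}

    def forget(self, uuid, ext):
        # Mirrors the remove branch: drop silently if absent
        self.known_uuids.pop(uuid + ext, None)

    def existing_path(self, uuid, ext):
        # Mirrors _uuid_already_on_device followed by the lpath check
        book = self.known_uuids.get(uuid + ext)
        return book['lpath'] if book else None
```

Keying on `uuid + ext` rather than uuid alone lets the same book exist on the device in several formats without colliding.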
@ -8,11 +8,11 @@ __docformat__ = 'restructuredtext en'
|
|||||||
Convert OEB ebook format to PDF.
|
Convert OEB ebook format to PDF.
|
||||||
'''
|
'''
|
||||||
|
|
||||||
import glob
|
import glob, os
|
||||||
import os
|
|
||||||
|
|
||||||
from calibre.customize.conversion import OutputFormatPlugin, \
|
from calibre.constants import iswindows, islinux
|
||||||
OptionRecommendation
|
from calibre.customize.conversion import (OutputFormatPlugin,
|
||||||
|
OptionRecommendation)
|
||||||
from calibre.ptempfile import TemporaryDirectory
|
from calibre.ptempfile import TemporaryDirectory
|
||||||
|
|
||||||
UNITS = ['millimeter', 'centimeter', 'point', 'inch' , 'pica' , 'didot',
|
UNITS = ['millimeter', 'centimeter', 'point', 'inch' , 'pica' , 'didot',
|
||||||
@ -73,13 +73,13 @@ class PDFOutput(OutputFormatPlugin):
|
|||||||
' of stretching it to fill the full first page of the'
|
' of stretching it to fill the full first page of the'
|
||||||
' generated pdf.')),
|
' generated pdf.')),
|
||||||
OptionRecommendation(name='pdf_serif_family',
|
OptionRecommendation(name='pdf_serif_family',
|
||||||
recommended_value='Times New Roman', help=_(
|
recommended_value='Liberation Serif' if islinux else 'Times New Roman', help=_(
|
||||||
'The font family used to render serif fonts')),
|
'The font family used to render serif fonts')),
|
||||||
OptionRecommendation(name='pdf_sans_family',
|
OptionRecommendation(name='pdf_sans_family',
|
||||||
recommended_value='Helvetica', help=_(
|
recommended_value='Liberation Sans' if islinux else 'Helvetica', help=_(
|
||||||
'The font family used to render sans-serif fonts')),
|
'The font family used to render sans-serif fonts')),
|
||||||
OptionRecommendation(name='pdf_mono_family',
|
OptionRecommendation(name='pdf_mono_family',
|
||||||
recommended_value='Courier New', help=_(
|
recommended_value='Liberation Mono' if islinux else 'Courier New', help=_(
|
||||||
'The font family used to render monospaced fonts')),
|
'The font family used to render monospaced fonts')),
|
||||||
OptionRecommendation(name='pdf_standard_font', choices=['serif',
|
OptionRecommendation(name='pdf_standard_font', choices=['serif',
|
||||||
'sans', 'mono'],
|
'sans', 'mono'],
|
||||||
@ -102,6 +102,10 @@ class PDFOutput(OutputFormatPlugin):
|
|||||||
])
|
])
|
||||||
|
|
||||||
def convert(self, oeb_book, output_path, input_plugin, opts, log):
|
def convert(self, oeb_book, output_path, input_plugin, opts, log):
|
||||||
|
from calibre.gui2 import must_use_qt, load_builtin_fonts
|
||||||
|
must_use_qt()
|
||||||
|
load_builtin_fonts()
|
||||||
|
|
||||||
self.oeb = oeb_book
|
self.oeb = oeb_book
|
||||||
self.input_plugin, self.opts, self.log = input_plugin, opts, log
|
self.input_plugin, self.opts, self.log = input_plugin, opts, log
|
||||||
self.output_path = output_path
|
self.output_path = output_path
|
||||||
@ -135,9 +139,8 @@ class PDFOutput(OutputFormatPlugin):
|
|||||||
If you ever move to Qt WebKit 2.3+ then this will be unnecessary.
|
If you ever move to Qt WebKit 2.3+ then this will be unnecessary.
|
||||||
'''
|
'''
|
||||||
from calibre.ebooks.oeb.base import urlnormalize
|
from calibre.ebooks.oeb.base import urlnormalize
|
||||||
from calibre.gui2 import must_use_qt
|
from calibre.utils.fonts.utils import remove_embed_restriction
|
||||||
from calibre.utils.fonts.utils import get_font_names, remove_embed_restriction
|
from PyQt4.Qt import QFontDatabase, QByteArray, QRawFont, QFont
|
||||||
from PyQt4.Qt import QFontDatabase, QByteArray
|
|
||||||
|
|
||||||
# First find all @font-face rules and remove them, adding the embedded
|
# First find all @font-face rules and remove them, adding the embedded
|
||||||
# fonts to Qt
|
# fonts to Qt
|
||||||
@ -165,12 +168,13 @@ class PDFOutput(OutputFormatPlugin):
|
|||||||
raw = remove_embed_restriction(raw)
|
raw = remove_embed_restriction(raw)
|
||||||
except:
|
except:
|
||||||
continue
|
continue
|
||||||
must_use_qt()
|
fid = QFontDatabase.addApplicationFontFromData(QByteArray(raw))
|
||||||
QFontDatabase.addApplicationFontFromData(QByteArray(raw))
|
family_name = None
|
||||||
try:
|
if fid > -1:
|
||||||
family_name = get_font_names(raw)[0]
|
try:
|
||||||
except:
|
family_name = unicode(QFontDatabase.applicationFontFamilies(fid)[0])
|
||||||
family_name = None
|
except (IndexError, KeyError):
|
||||||
|
pass
|
||||||
if family_name:
|
if family_name:
|
||||||
family_map[icu_lower(font_family)] = family_name
|
family_map[icu_lower(font_family)] = family_name
|
||||||
|
|
||||||
@ -179,6 +183,7 @@ class PDFOutput(OutputFormatPlugin):
|
|||||||
|
|
||||||
# Now map the font family name specified in the css to the actual
|
# Now map the font family name specified in the css to the actual
|
||||||
# family name of the embedded font (they may be different in general).
|
 # family name of the embedded font (they may be different in general).
+        font_warnings = set()
         for item in self.oeb.manifest:
             if not hasattr(item.data, 'cssRules'): continue
             for i, rule in enumerate(item.data.cssRules):
@@ -187,9 +192,28 @@ class PDFOutput(OutputFormatPlugin):
                 if ff is None: continue
                 val = ff.propertyValue
                 for i in xrange(val.length):
-                    k = icu_lower(val[i].value)
+                    try:
+                        k = icu_lower(val[i].value)
+                    except (AttributeError, TypeError):
+                        val[i].value = k = 'times'
                     if k in family_map:
                         val[i].value = family_map[k]
+                if iswindows:
+                    # On windows, Qt uses GDI which does not support OpenType
+                    # (CFF) fonts, so we need to nuke references to OpenType
+                    # fonts. Note that you could compile QT with configure
+                    # -directwrite, but that requires atleast Vista SP2
+                    for i in xrange(val.length):
+                        family = val[i].value
+                        if family:
+                            f = QRawFont.fromFont(QFont(family))
+                            if len(f.fontTable('head')) == 0:
+                                if family not in font_warnings:
+                                    self.log.warn('Ignoring unsupported font: %s'
+                                            %family)
+                                    font_warnings.add(family)
+                                # Either a bitmap or (more likely) a CFF font
+                                val[i].value = 'times'

     def convert_text(self, oeb_book):
         from calibre.ebooks.metadata.opf2 import OPF
@@ -232,7 +256,15 @@ class PDFOutput(OutputFormatPlugin):
             out_stream.seek(0)
             out_stream.truncate()
             self.log.debug('Rendering pages to PDF...')
-            writer.dump(items, out_stream, PDFMetadata(self.metadata))
+            import time
+            st = time.time()
+            if False:
+                import cProfile
+                cProfile.runctx('writer.dump(items, out_stream, PDFMetadata(self.metadata))',
+                        globals(), locals(), '/tmp/profile')
+            else:
+                writer.dump(items, out_stream, PDFMetadata(self.metadata))
+            self.log('Rendered PDF in %g seconds:'%(time.time()-st))

         if close:
             out_stream.close()
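The Windows branch above detects unusable fonts by asking QRawFont for the sfnt `head` table, which OpenType/CFF and bitmap fonts lack a usable form of. As a rough stand-alone analogue of the same idea (an illustration only, not the code calibre runs): OpenType fonts with CFF outlines start with the `b'OTTO'` sfnt tag, while TrueType fonts start with the 4-byte version 1.0 tag.

```python
def looks_like_cff(font_data):
    # OpenType fonts with CFF outlines begin with the b'OTTO' sfnt tag;
    # TrueType fonts begin with the 4-byte version b'\x00\x01\x00\x00'
    return font_data[:4] == b'OTTO'

assert looks_like_cff(b'OTTO' + b'\x00' * 12)
assert not looks_like_cff(b'\x00\x01\x00\x00' + b'\x00' * 12)
```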
@@ -11,13 +11,17 @@ import struct, os, functools, re
 from urlparse import urldefrag
 from cStringIO import StringIO
 from urllib import unquote as urlunquote
+
+from lxml import etree
+
 from calibre.ebooks.lit import LitError
 from calibre.ebooks.lit.maps import OPF_MAP, HTML_MAP
 import calibre.ebooks.lit.mssha1 as mssha1
-from calibre.ebooks.oeb.base import urlnormalize
+from calibre.ebooks.oeb.base import urlnormalize, xpath
 from calibre.ebooks.oeb.reader import OEBReader
 from calibre.ebooks import DRMError
 from calibre import plugins

 lzx, lxzerror = plugins['lzx']
 msdes, msdeserror = plugins['msdes']

@@ -907,3 +911,16 @@ class LitReader(OEBReader):
     Container = LitContainer
     DEFAULT_PROFILE = 'MSReader'
+
+    def _spine_from_opf(self, opf):
+        manifest = self.oeb.manifest
+        for elem in xpath(opf, '/o2:package/o2:spine/o2:itemref'):
+            idref = elem.get('idref')
+            if idref not in manifest.ids:
+                continue
+            item = manifest.ids[idref]
+            if (item.media_type.lower() == 'application/xml' and
+                    hasattr(item.data, 'xpath') and item.data.xpath('/html')):
+                item.media_type = 'application/xhtml+xml'
+                item.data = item._parse_xhtml(etree.tostring(item.data))
+        super(LitReader, self)._spine_from_opf(opf)
@@ -41,7 +41,6 @@ def find_custom_fonts(options, logger):
     if options.serif_family:
         f = family(options.serif_family)
         fonts['serif'] = font_scanner.legacy_fonts_for_family(f)
-        print (111111, fonts['serif'])
         if not fonts['serif']:
             logger.warn('Unable to find serif family %s'%f)
     if options.sans_family:
@@ -291,6 +291,8 @@ def set_metadata(stream, mi, apply_null=False, update_timestamp=False):

     reader.opf.smart_update(mi)
+    if getattr(mi, 'uuid', None):
+        reader.opf.application_id = mi.uuid
     if apply_null:
         if not getattr(mi, 'series', None):
             reader.opf.series = None
@@ -390,6 +390,10 @@ class MetadataUpdater(object):
                 not added_501 and not share_not_sync):
             from uuid import uuid4
             update_exth_record((113, str(uuid4())))
+        # Add a 112 record with actual UUID
+        if getattr(mi, 'uuid', None):
+            update_exth_record((112,
+                    (u"calibre:%s" % mi.uuid).encode(self.codec, 'replace')))
         if 503 in self.original_exth_records:
             update_exth_record((503, mi.title.encode(self.codec, 'replace')))
@@ -941,12 +941,11 @@ class OPF(object): # {{{
             return self.get_text(match) or None

         def fset(self, val):
-            matches = self.application_id_path(self.metadata)
-            if not matches:
+            for x in tuple(self.application_id_path(self.metadata)):
+                x.getparent().remove(x)
             attrib = {'{%s}scheme'%self.NAMESPACES['opf']: 'calibre'}
-            matches = [self.create_metadata_element('identifier',
-                    attrib=attrib)]
-            self.set_text(matches[0], unicode(val))
+            self.set_text(self.create_metadata_element(
+                'identifier', attrib=attrib), unicode(val))

         return property(fget=fget, fset=fset)
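The `fset` rewrite in the OPF hunk above switches from reusing the first matching identifier element to dropping every existing match and creating a fresh one, so stale duplicates cannot survive. A hedged stand-alone sketch of that pattern, using the stdlib ElementTree as a stand-in for calibre's lxml calls (element names and ids here are invented for the example):

```python
import xml.etree.ElementTree as ET

root = ET.fromstring(
    '<metadata>'
    '<identifier scheme="calibre">old-1</identifier>'
    '<identifier scheme="calibre">old-2</identifier>'
    '</metadata>')

# Drop every existing match before creating a fresh element
for x in tuple(root.findall('identifier')):
    root.remove(x)
new = ET.SubElement(root, 'identifier', scheme='calibre')
new.text = 'new-id'

assert [e.text for e in root.findall('identifier')] == ['new-id']
```

Note the `tuple(...)` copy: removing children while iterating the live result list would skip elements, which is the same reason the diff materialises the xpath matches first.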
@@ -110,6 +110,12 @@ def build_exth(metadata, prefer_author_sort=False, is_periodical=False,
         exth.write(uuid)
         nrecs += 1
+
+        # Write UUID as SOURCE
+        c_uuid = b'calibre:%s' % uuid
+        exth.write(pack(b'>II', 112, len(c_uuid) + 8))
+        exth.write(c_uuid)
+        nrecs += 1

     # Write cdetype
     if not is_periodical:
         if not share_not_sync:
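The EXTH record layout the hunk above relies on can be read in isolation: each record is a big-endian (type, length) header followed by the payload, with the length field counting its own 8 header bytes. A minimal stand-alone sketch (the helper name and the sample UUID payload are invented for the example):

```python
from struct import pack

def exth_record(rec_type, data):
    # 4-byte record type, 4-byte total length (header included), then payload
    return pack(b'>II', rec_type, len(data) + 8) + data

c_uuid = b'calibre:0a1b2c3d'          # hypothetical 112/SOURCE payload
rec = exth_record(112, c_uuid)
assert rec[8:] == c_uuid
assert len(rec) == len(c_uuid) + 8
```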
@@ -115,8 +115,11 @@ class MergeMetadata(object):
         if mi.uuid is not None:
             m.filter('identifier', lambda x:x.id=='uuid_id')
             self.oeb.metadata.add('identifier', mi.uuid, id='uuid_id',
                                   scheme='uuid')
             self.oeb.uid = self.oeb.metadata.identifier[-1]
+        if mi.application_id is not None:
+            m.filter('identifier', lambda x:x.scheme=='calibre')
+            self.oeb.metadata.add('identifier', mi.application_id, scheme='calibre')

     def set_cover(self, mi, prefer_metadata_cover):
         cdata, ext = '', 'jpg'
@@ -36,7 +36,15 @@ class SubsetFonts(object):
             self.oeb.manifest.remove(font['item'])
             font['rule'].parentStyleSheet.deleteRule(font['rule'])

+        fonts = {}
         for font in self.embedded_fonts:
+            item, chars = font['item'], font['chars']
+            if item.href in fonts:
+                fonts[item.href]['chars'] |= chars
+            else:
+                fonts[item.href] = font
+
+        for font in fonts.itervalues():
             if not font['chars']:
                 self.log('The font %s is unused. Removing it.'%font['src'])
                 remove(font)
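The de-duplication added in the SubsetFonts hunk above can be sketched independently: fonts sharing an href are merged and their used-character sets unioned, so a face referenced from several stylesheets is subset once with the union of its glyphs. A hedged illustration with plain dicts standing in for calibre's font records:

```python
embedded_fonts = [
    {'href': 'a.ttf', 'chars': {1, 2}},   # same face, two stylesheets
    {'href': 'a.ttf', 'chars': {2, 3}},
    {'href': 'b.ttf', 'chars': set()},    # referenced but unused
]

fonts = {}
for font in embedded_fonts:
    if font['href'] in fonts:
        fonts[font['href']]['chars'] |= font['chars']   # union the glyph sets
    else:
        fonts[font['href']] = font

assert fonts['a.ttf']['chars'] == {1, 2, 3}
assert fonts['b.ttf']['chars'] == set()                 # candidate for removal
```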
@@ -9,8 +9,11 @@ __docformat__ = 'restructuredtext en'

 import codecs, zlib
 from io import BytesIO
-from struct import pack
-from decimal import Decimal
+from datetime import datetime
+
+from calibre.constants import plugins, ispy3
+
+pdf_float = plugins['speedup'][0].pdf_float

 EOL = b'\n'

@@ -52,32 +55,31 @@ PAPER_SIZES = {k:globals()[k.upper()] for k in ('a0 a1 a2 a3 a4 a5 a6 b0 b1 b2'

 # Basic PDF datatypes {{{

-def format_float(f):
-    if abs(f) < 1e-7:
-        return '0'
-    places = 6
-    a, b = type(u'')(Decimal(f).quantize(Decimal(10)**-places)).partition('.')[0::2]
-    b = b.rstrip('0')
-    if not b:
-        return '0' if a == '-0' else a
-    return '%s.%s'%(a, b)
+ic = str if ispy3 else unicode
+icb = (lambda x: str(x).encode('ascii')) if ispy3 else bytes

 def fmtnum(o):
-    if isinstance(o, (int, long)):
-        return type(u'')(o)
-    return format_float(o)
+    if isinstance(o, float):
+        return pdf_float(o)
+    return ic(o)

 def serialize(o, stream):
-    if hasattr(o, 'pdf_serialize'):
-        o.pdf_serialize(stream)
+    if isinstance(o, float):
+        stream.write_raw(pdf_float(o).encode('ascii'))
     elif isinstance(o, bool):
-        stream.write(b'true' if o else b'false')
+        # Must check bool before int as bools are subclasses of int
+        stream.write_raw(b'true' if o else b'false')
     elif isinstance(o, (int, long)):
-        stream.write(type(u'')(o).encode('ascii'))
-    elif isinstance(o, float):
-        stream.write(format_float(o).encode('ascii'))
+        stream.write_raw(icb(o))
+    elif hasattr(o, 'pdf_serialize'):
+        o.pdf_serialize(stream)
     elif o is None:
-        stream.write(b'null')
+        stream.write_raw(b'null')
+    elif isinstance(o, datetime):
+        val = o.strftime("D:%Y%m%d%H%M%%02d%z")%min(59, o.second)
+        if datetime.tzinfo is not None:
+            val = "(%s'%s')"%(val[:-2], val[-2:])
+        stream.write(val.encode('ascii'))
     else:
         raise ValueError('Unknown object: %r'%o)
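One detail the serialize() rewrite above is careful about is dispatch order: `isinstance(True, int)` is true in Python, so the bool branch must be tested before the int branch or `True`/`False` would be emitted as `1`/`0`. A minimal stand-alone sketch of the pitfall (simplified to return bytes rather than write to a stream):

```python
def serialize(o):
    if isinstance(o, bool):
        # bool is a subclass of int, so this branch must come first;
        # an int check placed earlier would swallow True and False
        return b'true' if o else b'false'
    if isinstance(o, int):
        return str(o).encode('ascii')
    raise ValueError('Unknown object: %r' % o)

assert serialize(True) == b'true'
assert serialize(False) == b'false'
assert serialize(42) == b'42'
```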
@@ -103,13 +105,6 @@ class String(unicode):
         raw = codecs.BOM_UTF16_BE + s.encode('utf-16-be')
         stream.write(b'('+raw+b')')

-class GlyphIndex(int):
-
-    def pdf_serialize(self, stream):
-        byts = bytearray(pack(b'>H', self))
-        stream.write('<%s>'%''.join(map(
-            lambda x: bytes(hex(x)[2:]).rjust(2, b'0'), byts)))
-
 class Dictionary(dict):

     def pdf_serialize(self, stream):
@@ -180,6 +175,9 @@ class Stream(BytesIO):
         super(Stream, self).write(raw if isinstance(raw, bytes) else
                 raw.encode('ascii'))

+    def write_raw(self, raw):
+        BytesIO.write(self, raw)
+
 class Reference(object):

     def __init__(self, num, obj):
@@ -13,15 +13,13 @@ from functools import wraps, partial
 from future_builtins import map

 import sip
-from PyQt4.Qt import (QPaintEngine, QPaintDevice, Qt, QApplication, QPainter,
-                      QTransform, QImage, QByteArray, QBuffer,
-                      qRgba)
+from PyQt4.Qt import (QPaintEngine, QPaintDevice, Qt, QTransform, QBrush)

 from calibre.constants import plugins
 from calibre.ebooks.pdf.render.serialize import (PDFStream, Path)
 from calibre.ebooks.pdf.render.common import inch, A4, fmtnum
 from calibre.ebooks.pdf.render.graphics import convert_path, Graphics
-from calibre.utils.fonts.sfnt.container import Sfnt
+from calibre.utils.fonts.sfnt.container import Sfnt, UnsupportedFont
 from calibre.utils.fonts.sfnt.metrics import FontMetrics

 Point = namedtuple('Point', 'x y')
@@ -51,11 +49,18 @@ class Font(FontMetrics):

 class PdfEngine(QPaintEngine):

+    FEATURES = QPaintEngine.AllFeatures & ~(
+            QPaintEngine.PorterDuff | QPaintEngine.PerspectiveTransform
+            | QPaintEngine.ObjectBoundingModeGradients
+            | QPaintEngine.RadialGradientFill
+            | QPaintEngine.ConicalGradientFill
+            )
+
     def __init__(self, file_object, page_width, page_height, left_margin,
                  top_margin, right_margin, bottom_margin, width, height,
                  errors=print, debug=print, compress=True,
                  mark_links=False):
-        QPaintEngine.__init__(self, self.features)
+        QPaintEngine.__init__(self, self.FEATURES)
         self.file_object = file_object
         self.compress, self.mark_links = compress, mark_links
         self.page_height, self.page_width = page_height, page_width
@@ -76,13 +81,10 @@ class PdfEngine(QPaintEngine):
                              self.bottom_margin) / self.pixel_height

         self.pdf_system = QTransform(sx, 0, 0, -sy, dx, dy)
-        self.graphics = Graphics()
+        self.graphics = Graphics(self.pixel_width, self.pixel_height)
         self.errors_occurred = False
         self.errors, self.debug = errors, debug
         self.fonts = {}
-        i = QImage(1, 1, QImage.Format_ARGB32)
-        i.fill(qRgba(0, 0, 0, 255))
-        self.alpha_bit = i.constBits().asstring(4).find(b'\xff')
         self.current_page_num = 1
         self.current_page_inited = False
         self.qt_hack, err = plugins['qt_hack']
@@ -90,7 +92,11 @@ class PdfEngine(QPaintEngine):
             raise RuntimeError('Failed to load qt_hack with err: %s'%err)

     def apply_graphics_state(self):
-        self.graphics(self.pdf, self.pdf_system, self.painter())
+        self.graphics(self.pdf_system, self.painter())
+
+    def resolve_fill(self, rect):
+        self.graphics.resolve_fill(rect, self.pdf_system,
+                self.painter().transform())

     @property
     def do_fill(self):
@@ -102,18 +108,10 @@ class PdfEngine(QPaintEngine):

     def init_page(self):
         self.pdf.transform(self.pdf_system)
-        self.pdf.set_rgb_colorspace()
         self.graphics.reset()
         self.pdf.save_stack()
         self.current_page_inited = True

-    @property
-    def features(self):
-        # gradient_flags = self.MaskedBrush | self.PatternBrush | self.PatternTransform
-        return (self.Antialiasing | self.AlphaBlend | self.ConstantOpacity |
-                self.PainterPaths | self.PaintOutsidePaintEvent |
-                self.PrimitiveTransform | self.PixmapTransform) #| gradient_flags
-
     def begin(self, device):
         if not hasattr(self, 'pdf'):
             try:
@@ -121,6 +119,7 @@ class PdfEngine(QPaintEngine):
                         self.page_height), compress=self.compress,
                     mark_links=self.mark_links,
                     debug=self.debug)
+                self.graphics.begin(self.pdf)
             except:
                 self.errors(traceback.format_exc())
                 self.errors_occurred = True
@@ -149,7 +148,23 @@ class PdfEngine(QPaintEngine):
     def type(self):
         return QPaintEngine.Pdf

-    # TODO: Tiled pixmap
+    def add_image(self, img, cache_key):
+        if img.isNull(): return
+        return self.pdf.add_image(img, cache_key)
+
+    @store_error
+    def drawTiledPixmap(self, rect, pixmap, point):
+        self.apply_graphics_state()
+        brush = QBrush(pixmap)
+        bl = rect.topLeft()
+        color, opacity, pattern, do_fill = self.graphics.convert_brush(
+            brush, bl-point, 1.0, self.pdf_system,
+            self.painter().transform())
+        self.pdf.save_stack()
+        self.pdf.apply_fill(color, pattern)
+        self.pdf.draw_rect(bl.x(), bl.y(), rect.width(), rect.height(),
+                           stroke=False, fill=True)
+        self.pdf.restore_stack()

     @store_error
     def drawPixmap(self, rect, pixmap, source_rect):
@@ -160,8 +175,8 @@ class PdfEngine(QPaintEngine):
         image = pixmap.toImage()
         ref = self.add_image(image, pixmap.cacheKey())
         if ref is not None:
-            self.pdf.draw_image(rect.x(), rect.height()+rect.y(), rect.width(),
-                                -rect.height(), ref)
+            self.pdf.draw_image(rect.x(), rect.y(), rect.width(),
+                                rect.height(), ref)

     @store_error
     def drawImage(self, rect, image, source_rect, flags=Qt.AutoColor):
@@ -171,72 +186,8 @@ class PdfEngine(QPaintEngine):
                 image.copy(source_rect))
         ref = self.add_image(image, image.cacheKey())
         if ref is not None:
-            self.pdf.draw_image(rect.x(), rect.height()+rect.y(), rect.width(),
-                                -rect.height(), ref)
+            self.pdf.draw_image(rect.x(), rect.y(), rect.width(),
+                                rect.height(), ref)

-    def add_image(self, img, cache_key):
-        if img.isNull(): return
-        ref = self.pdf.get_image(cache_key)
-        if ref is not None:
-            return ref
-
-        fmt = img.format()
-        image = QImage(img)
-        if (image.depth() == 1 and img.colorTable().size() == 2 and
-                img.colorTable().at(0) == QColor(Qt.black).rgba() and
-                img.colorTable().at(1) == QColor(Qt.white).rgba()):
-            if fmt == QImage.Format_MonoLSB:
-                image = image.convertToFormat(QImage.Format_Mono)
-            fmt = QImage.Format_Mono
-        else:
-            if (fmt != QImage.Format_RGB32 and fmt != QImage.Format_ARGB32):
-                image = image.convertToFormat(QImage.Format_ARGB32)
-                fmt = QImage.Format_ARGB32
-
-        w = image.width()
-        h = image.height()
-        d = image.depth()
-
-        if fmt == QImage.Format_Mono:
-            bytes_per_line = (w + 7) >> 3
-            data = image.constBits().asstring(bytes_per_line * h)
-            return self.pdf.write_image(data, w, h, d, cache_key=cache_key)
-
-        ba = QByteArray()
-        buf = QBuffer(ba)
-        image.save(buf, 'jpeg', 94)
-        data = bytes(ba.data())
-        has_alpha = has_mask = False
-        soft_mask = mask = None
-
-        if fmt == QImage.Format_ARGB32:
-            tmask = image.constBits().asstring(4*w*h)[self.alpha_bit::4]
-            sdata = bytearray(tmask)
-            vals = set(sdata)
-            vals.discard(255)
-            has_mask = bool(vals)
-            vals.discard(0)
-            has_alpha = bool(vals)
-
-        if has_alpha:
-            soft_mask = self.pdf.write_image(tmask, w, h, 8)
-        elif has_mask:
-            # dither the soft mask to 1bit and add it. This also helps PDF
-            # viewers without transparency support
-            bytes_per_line = (w + 7) >> 3
-            mdata = bytearray(0 for i in xrange(bytes_per_line * h))
-            spos = mpos = 0
-            for y in xrange(h):
-                for x in xrange(w):
-                    if sdata[spos]:
-                        mdata[mpos + x>>3] |= (0x80 >> (x&7))
-                    spos += 1
-                mpos += bytes_per_line
-            mdata = bytes(mdata)
-            mask = self.pdf.write_image(mdata, w, h, 1)
-
-        return self.pdf.write_image(data, w, h, 32, mask=mask, dct=True,
-                                    soft_mask=soft_mask, cache_key=cache_key)
-
     @store_error
     def updateState(self, state):
@@ -263,14 +214,20 @@ class PdfEngine(QPaintEngine):
     @store_error
     def drawRects(self, rects):
         self.apply_graphics_state()
-        for rect in rects:
-            bl = rect.topLeft()
-            self.pdf.draw_rect(bl.x(), bl.y(), rect.width(), rect.height(),
-                               stroke=self.do_stroke, fill=self.do_fill)
+        with self.graphics:
+            for rect in rects:
+                self.resolve_fill(rect)
+                bl = rect.topLeft()
+                self.pdf.draw_rect(bl.x(), bl.y(), rect.width(), rect.height(),
+                                   stroke=self.do_stroke, fill=self.do_fill)

     def create_sfnt(self, text_item):
         get_table = partial(self.qt_hack.get_sfnt_table, text_item)
-        ans = Font(Sfnt(get_table))
+        try:
+            ans = Font(Sfnt(get_table))
+        except UnsupportedFont as e:
+            raise UnsupportedFont('The font %s is not a valid sfnt. Error: %s'%(
+                text_item.font().family(), e))
         glyph_map = self.qt_hack.get_glyph_map(text_item)
         gm = {}
         for uc, glyph_id in enumerate(glyph_map):
@@ -281,7 +238,7 @@ class PdfEngine(QPaintEngine):

     @store_error
     def drawTextItem(self, point, text_item):
-        # super(PdfEngine, self).drawTextItem(point, text_item)
+        # return super(PdfEngine, self).drawTextItem(point, text_item)
         self.apply_graphics_state()
         gi = self.qt_hack.get_glyphs(point, text_item)
         if not gi.indices:
@@ -289,7 +246,10 @@ class PdfEngine(QPaintEngine):
             return
         name = hash(bytes(gi.name))
         if name not in self.fonts:
-            self.fonts[name] = self.create_sfnt(text_item)
+            try:
+                self.fonts[name] = self.create_sfnt(text_item)
+            except UnsupportedFont:
+                return super(PdfEngine, self).drawTextItem(point, text_item)
         metrics = self.fonts[name]
         for glyph_id in gi.indices:
             try:
@ -297,18 +257,14 @@ class PdfEngine(QPaintEngine):
|
|||||||
except (KeyError, ValueError):
|
except (KeyError, ValueError):
|
||||||
pass
|
pass
|
||||||
glyphs = []
|
glyphs = []
|
||||||
pdf_pos = point
|
last_x = last_y = 0
|
||||||
first_baseline = None
|
|
||||||
for i, pos in enumerate(gi.positions):
|
for i, pos in enumerate(gi.positions):
|
||||||
if first_baseline is None:
|
x, y = pos.x(), pos.y()
|
||||||
first_baseline = pos.y()
|
glyphs.append((x-last_x, last_y - y, gi.indices[i]))
|
||||||
glyph_pos = pos
|
last_x, last_y = x, y
|
||||||
delta = glyph_pos - pdf_pos
|
|
||||||
glyphs.append((delta.x(), pos.y()-first_baseline, gi.indices[i]))
|
|
||||||
pdf_pos = glyph_pos
|
|
||||||
|
|
||||||
self.pdf.draw_glyph_run([1, 0, 0, -1, point.x(),
|
self.pdf.draw_glyph_run([gi.stretch, 0, 0, -1, 0, 0], gi.size, metrics,
|
||||||
point.y()], gi.size, metrics, glyphs)
|
glyphs)
|
||||||
sip.delete(gi)
|
sip.delete(gi)
|
||||||
|
|
||||||
@store_error
|
@store_error
|
||||||
@ -388,8 +344,8 @@ class PdfDevice(QPaintDevice): # {{{
|
|||||||
return int(round(self.body_height * self.ydpi / 72.0))
|
return int(round(self.body_height * self.ydpi / 72.0))
|
||||||
return 0
|
return 0
|
||||||
|
|
||||||
def end_page(self):
|
def end_page(self, *args, **kwargs):
|
||||||
self.engine.end_page()
|
self.engine.end_page(*args, **kwargs)
|
||||||
|
|
||||||
def init_page(self):
|
def init_page(self):
|
||||||
self.engine.init_page()
|
self.engine.init_page()
|
||||||
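The new glyph loop in the drawTextItem hunk above stores each glyph relative to its predecessor: an x advance and a negated y delta, matching PDF's convention of the y axis growing upward. A stand-alone sketch of that encoding, with plain (x, y) tuples standing in for Qt's position objects:

```python
def delta_encode(positions):
    # Encode absolute (x, y) glyph positions as (dx, -dy) steps from the
    # previous glyph, starting from the origin
    glyphs, last_x, last_y = [], 0, 0
    for x, y in positions:
        glyphs.append((x - last_x, last_y - y))
        last_x, last_y = x, y
    return glyphs

assert delta_encode([(10, 100), (25, 100), (40, 98)]) == [
    (10, -100), (15, 0), (15, 2)]
```

The first entry carries the absolute start position as a delta from the origin, which is why the rewritten call to draw_glyph_run no longer needs point.x()/point.y() in its transform matrix.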
@ -411,55 +367,4 @@ class PdfDevice(QPaintDevice): # {{{
|
|||||||
|
|
||||||
# }}}
|
# }}}
|
||||||
|
|
||||||
if __name__ == '__main__':
|
|
||||||
from PyQt4.Qt import (QBrush, QColor, QPoint, QPixmap, QPainterPath)
|
|
||||||
QBrush, QColor, QPoint, QPixmap, QPainterPath
|
|
||||||
app = QApplication([])
|
|
||||||
p = QPainter()
|
|
||||||
with open('/t/painter.pdf', 'wb') as f:
|
|
||||||
dev = PdfDevice(f, compress=False)
|
|
||||||
p.begin(dev)
|
|
||||||
dev.init_page()
|
|
||||||
xmax, ymax = p.viewport().width(), p.viewport().height()
|
|
||||||
b = p.brush()
|
|
||||||
try:
|
|
||||||
p.drawRect(0, 0, xmax, ymax)
|
|
||||||
# p.drawPolyline(QPoint(0, 0), QPoint(xmax, 0), QPoint(xmax, ymax),
|
|
||||||
# QPoint(0, ymax), QPoint(0, 0))
|
|
||||||
# pp = QPainterPath()
|
|
||||||
# pp.addRect(0, 0, xmax, ymax)
|
|
||||||
# p.drawPath(pp)
|
|
||||||
p.save()
|
|
||||||
for i in xrange(3):
|
|
||||||
col = [0, 0, 0, 200]
|
|
||||||
col[i] = 255
|
|
||||||
p.setOpacity(0.3)
|
|
||||||
p.fillRect(0, 0, xmax/10, xmax/10, QBrush(QColor(*col)))
|
|
||||||
p.setOpacity(1)
|
|
||||||
p.drawRect(0, 0, xmax/10, xmax/10)
|
|
||||||
p.translate(xmax/10, xmax/10)
|
|
||||||
p.scale(1, 1.5)
|
|
||||||
p.restore()
|
|
||||||
|
|
||||||
# p.scale(2, 2)
|
|
||||||
# p.rotate(45)
|
|
||||||
p.drawPixmap(0, 0, 2048, 2048, QPixmap(I('library.png')))
|
|
||||||
p.drawRect(0, 0, 2048, 2048)
|
|
||||||
|
|
||||||
f = p.font()
|
|
||||||
f.setPointSize(20)
|
|
||||||
# f.setLetterSpacing(f.PercentageSpacing, 200)
|
|
||||||
# f.setUnderline(True)
|
|
||||||
# f.setOverline(True)
|
|
||||||
# f.setStrikeOut(True)
|
|
||||||
f.setFamily('Calibri')
|
|
||||||
p.setFont(f)
|
|
||||||
# p.setPen(QColor(0, 0, 255))
|
|
||||||
# p.scale(2, 2)
|
|
||||||
# p.rotate(45)
|
|
||||||
p.drawText(QPoint(300, 300), 'Some—text not By’s ū --- Д AV ff ff')
|
|
||||||
finally:
|
|
||||||
p.end()
|
|
||||||
if dev.engine.errors_occurred:
|
|
||||||
raise SystemExit(1)
|
|
||||||
|
|
||||||
@@ -47,7 +47,7 @@ def get_page_size(opts, for_comic=False): # {{{
     if opts.unit == 'devicepixel':
         factor = 72.0 / opts.output_profile.dpi
     else:
-        {'point':1.0, 'inch':inch, 'cicero':cicero,
+        factor = {'point':1.0, 'inch':inch, 'cicero':cicero,
                   'didot':didot, 'pica':pica, 'millimeter':mm,
                   'centimeter':cm}[opts.unit]
     page_size = (factor*width, factor*height)
@@ -147,9 +147,10 @@ class PDFWriter(QObject):
         opts = self.opts
         page_size = get_page_size(self.opts)
         xdpi, ydpi = self.view.logicalDpiX(), self.view.logicalDpiY()
+        # We cannot set the side margins in the webview as there is no right
+        # margin for the last page (the margins are implemented with
+        # -webkit-column-gap)
         ml, mr = opts.margin_left, opts.margin_right
-        margin_side = min(ml, mr)
-        ml, mr = ml - margin_side, mr - margin_side
         self.doc = PdfDevice(out_stream, page_size=page_size, left_margin=ml,
                              top_margin=0, right_margin=mr, bottom_margin=0,
                              xdpi=xdpi, ydpi=ydpi, errors=self.log.error,
@@ -162,9 +163,7 @@ class PDFWriter(QObject):
         self.total_items = len(items)

         mt, mb = map(self.doc.to_px, (opts.margin_top, opts.margin_bottom))
-        ms = self.doc.to_px(margin_side, vertical=False)
-        self.margin_top, self.margin_size, self.margin_bottom = map(
-            lambda x:int(floor(x)), (mt, ms, mb))
+        self.margin_top, self.margin_bottom = map(lambda x:int(floor(x)), (mt, mb))

         self.painter = QPainter(self.doc)
         self.doc.set_metadata(title=pdf_metadata.title,
@@ -176,6 +175,7 @@ class PDFWriter(QObject):
         p = QPixmap()
         p.loadFromData(self.cover_data)
         if not p.isNull():
+            self.doc.init_page()
             draw_image_page(QRect(0, 0, self.doc.width(), self.doc.height()),
                             self.painter, p,
                             preserve_aspect_ratio=self.opts.preserve_cover_aspect_ratio)
@@ -184,7 +184,8 @@ class PDFWriter(QObject):
             self.painter.restore()

         QTimer.singleShot(0, self.render_book)
-        self.loop.exec_()
+        if self.loop.exec_() == 1:
+            raise Exception('PDF Output failed, see log for details')

         if self.toc is not None and len(self.toc) > 0:
             self.doc.add_outline(self.toc)
@@ -257,7 +258,7 @@ class PDFWriter(QObject):
                 paged_display.layout();
                 paged_display.fit_images();
                 py_bridge.value = book_indexing.all_links_and_anchors();
-            '''%(self.margin_top, self.margin_size, self.margin_bottom))
+            '''%(self.margin_top, 0, self.margin_bottom))

         amap = self.bridge_value
         if not isinstance(amap, dict):
@@ -278,6 +279,7 @@ class PDFWriter(QObject):
             if self.doc.errors_occurred:
                 break

-        self.doc.add_links(self.current_item, start_page, amap['links'],
-                           amap['anchors'])
+        if not self.doc.errors_occurred:
+            self.doc.add_links(self.current_item, start_page, amap['links'],
+                               amap['anchors'])
153
src/calibre/ebooks/pdf/render/gradients.py
Normal file
153
src/calibre/ebooks/pdf/render/gradients.py
Normal file
@@ -0,0 +1,153 @@
+#!/usr/bin/env python
+# vim:fileencoding=UTF-8:ts=4:sw=4:sta:et:sts=4:fdm=marker:ai
+from __future__ import (unicode_literals, division, absolute_import,
+                        print_function)
+
+__license__ = 'GPL v3'
+__copyright__ = '2013, Kovid Goyal <kovid at kovidgoyal.net>'
+__docformat__ = 'restructuredtext en'
+
+import sys, copy
+from future_builtins import map
+from collections import namedtuple
+
+import sip
+from PyQt4.Qt import QLinearGradient, QPointF
+
+from calibre.ebooks.pdf.render.common import Name, Array, Dictionary
+
+Stop = namedtuple('Stop', 't color')
+
+class LinearGradientPattern(Dictionary):
+
+    def __init__(self, brush, matrix, pdf, pixel_page_width, pixel_page_height):
+        self.matrix = (matrix.m11(), matrix.m12(), matrix.m21(), matrix.m22(),
+                       matrix.dx(), matrix.dy())
+        gradient = sip.cast(brush.gradient(), QLinearGradient)
+
+        start, stop, stops = self.spread_gradient(gradient, pixel_page_width,
+                                                  pixel_page_height, matrix)
+
+        # TODO: Handle colors with different opacities
+        self.const_opacity = stops[0].color[-1]
+
+        funcs = Array()
+        bounds = Array()
+        encode = Array()
+
+        for i, current_stop in enumerate(stops):
+            if i < len(stops) - 1:
+                next_stop = stops[i+1]
+                func = Dictionary({
+                    'FunctionType': 2,
+                    'Domain': Array([0, 1]),
+                    'C0': Array(current_stop.color[:3]),
+                    'C1': Array(next_stop.color[:3]),
+                    'N': 1,
+                })
+                funcs.append(func)
+                encode.extend((0, 1))
+                if i+1 < len(stops) - 1:
+                    bounds.append(next_stop.t)
+
+        func = Dictionary({
+            'FunctionType': 3,
+            'Domain': Array([stops[0].t, stops[-1].t]),
+            'Functions': funcs,
+            'Bounds': bounds,
+            'Encode': encode,
+        })
+
+        shader = Dictionary({
+            'ShadingType': 2,
+            'ColorSpace': Name('DeviceRGB'),
+            'AntiAlias': True,
+            'Coords': Array([start.x(), start.y(), stop.x(), stop.y()]),
+            'Function': func,
+            'Extend': Array([True, True]),
+        })
+
+        Dictionary.__init__(self, {
+            'Type': Name('Pattern'),
+            'PatternType': 2,
+            'Shading': shader,
+            'Matrix': Array(self.matrix),
+        })
+
+        self.cache_key = (self.__class__.__name__, self.matrix,
+                          tuple(shader['Coords']), stops)
+
+    def spread_gradient(self, gradient, pixel_page_width, pixel_page_height,
+                        matrix):
+        start = gradient.start()
+        stop = gradient.finalStop()
+        stops = list(map(lambda x: [x[0], x[1].getRgbF()], gradient.stops()))
+        spread = gradient.spread()
+        if spread != gradient.PadSpread:
+            inv = matrix.inverted()[0]
+            page_rect = tuple(map(inv.map, (
+                QPointF(0, 0), QPointF(pixel_page_width, 0), QPointF(0, pixel_page_height),
+                QPointF(pixel_page_width, pixel_page_height))))
+            maxx = maxy = -sys.maxint-1
+            minx = miny = sys.maxint
+
+            for p in page_rect:
+                minx, maxx = min(minx, p.x()), max(maxx, p.x())
+                miny, maxy = min(miny, p.y()), max(maxy, p.y())
+
+            def in_page(point):
+                return (minx <= point.x() <= maxx and miny <= point.y() <= maxy)
+
+            offset = stop - start
+            llimit, rlimit = start, stop
+
+            reflect = False
+            base_stops = copy.deepcopy(stops)
+            reversed_stops = list(reversed(stops))
+            do_reflect = spread == gradient.ReflectSpread
+            totl = abs(stops[-1][0] - stops[0][0])
+            intervals = [abs(stops[i+1][0] - stops[i][0])/totl
+                         for i in xrange(len(stops)-1)]
+
+            while in_page(llimit):
+                reflect ^= True
+                llimit -= offset
+                estops = reversed_stops if (reflect and do_reflect) else base_stops
+                stops = copy.deepcopy(estops) + stops
+
+            first_is_reflected = reflect
+            reflect = False
+
+            while in_page(rlimit):
+                reflect ^= True
+                rlimit += offset
+                estops = reversed_stops if (reflect and do_reflect) else base_stops
+                stops = stops + copy.deepcopy(estops)
+
+            start, stop = llimit, rlimit
+
+            num = len(stops) // len(base_stops)
+            if num > 1:
+                # Adjust the stop parameter values
+                t = base_stops[0][0]
+                rlen = totl/num
+                reflect = first_is_reflected ^ True
+                intervals = [i*rlen for i in intervals]
+                rintervals = list(reversed(intervals))
+
+                for i in xrange(num):
+                    reflect ^= True
+                    pos = i * len(base_stops)
+                    tvals = [t]
+                    for ival in (rintervals if reflect and do_reflect else
+                                 intervals):
+                        tvals.append(tvals[-1] + ival)
+                    for j in xrange(len(base_stops)):
+                        stops[pos+j][0] = tvals[j]
+                    t = tvals[-1]
+
+                # In case there were rounding errors
+                stops[-1][0] = base_stops[-1][0]
+
+        return start, stop, tuple(Stop(s[0], s[1]) for s in stops)
@@ -8,14 +8,17 @@ __copyright__ = '2012, Kovid Goyal <kovid at kovidgoyal.net>'
 __docformat__ = 'restructuredtext en'
 
 from math import sqrt
+from collections import namedtuple
 
-from PyQt4.Qt import (QBrush, QPen, Qt, QPointF, QTransform, QPainterPath,
-                      QPaintEngine)
+from PyQt4.Qt import (
+    QBrush, QPen, Qt, QPointF, QTransform, QPaintEngine, QImage)
 
-from calibre.ebooks.pdf.render.common import Array
-from calibre.ebooks.pdf.render.serialize import Path, Color
+from calibre.ebooks.pdf.render.common import (
+    Name, Array, fmtnum, Stream, Dictionary)
+from calibre.ebooks.pdf.render.serialize import Path
+from calibre.ebooks.pdf.render.gradients import LinearGradientPattern
 
-def convert_path(path):
+def convert_path(path): # {{{
     p = Path()
     i = 0
     while i < path.elementCount():
@@ -38,12 +41,215 @@ def convert_path(path):
             if not added:
                 raise ValueError('Invalid curve to operation')
     return p
+# }}}
+
+Brush = namedtuple('Brush', 'origin brush color')
+
+class TilingPattern(Stream):
+
+    def __init__(self, cache_key, matrix, w=8, h=8, paint_type=2, compress=False):
+        Stream.__init__(self, compress=compress)
+        self.paint_type = paint_type
+        self.w, self.h = w, h
+        self.matrix = (matrix.m11(), matrix.m12(), matrix.m21(), matrix.m22(),
+                       matrix.dx(), matrix.dy())
+        self.resources = Dictionary()
+        self.cache_key = (self.__class__.__name__, cache_key, self.matrix)
+
+    def add_extra_keys(self, d):
+        d['Type'] = Name('Pattern')
+        d['PatternType'] = 1
+        d['PaintType'] = self.paint_type
+        d['TilingType'] = 1
+        d['BBox'] = Array([0, 0, self.w, self.h])
+        d['XStep'] = self.w
+        d['YStep'] = self.h
+        d['Matrix'] = Array(self.matrix)
+        d['Resources'] = self.resources
+
+class QtPattern(TilingPattern):
+
+    qt_patterns = ( # {{{
+        "0 J\n"
+        "6 w\n"
+        "[] 0 d\n"
+        "4 0 m\n"
+        "4 8 l\n"
+        "0 4 m\n"
+        "8 4 l\n"
+        "S\n", # Dense1Pattern
+
+        "0 J\n"
+        "2 w\n"
+        "[6 2] 1 d\n"
+        "0 0 m\n"
+        "0 8 l\n"
+        "8 0 m\n"
+        "8 8 l\n"
+        "S\n"
+        "[] 0 d\n"
+        "2 0 m\n"
+        "2 8 l\n"
+        "6 0 m\n"
+        "6 8 l\n"
+        "S\n"
+        "[6 2] -3 d\n"
+        "4 0 m\n"
+        "4 8 l\n"
+        "S\n", # Dense2Pattern
+
+        "0 J\n"
+        "2 w\n"
+        "[6 2] 1 d\n"
+        "0 0 m\n"
+        "0 8 l\n"
+        "8 0 m\n"
+        "8 8 l\n"
+        "S\n"
+        "[2 2] -1 d\n"
+        "2 0 m\n"
+        "2 8 l\n"
+        "6 0 m\n"
+        "6 8 l\n"
+        "S\n"
+        "[6 2] -3 d\n"
+        "4 0 m\n"
+        "4 8 l\n"
+        "S\n", # Dense3Pattern
+
+        "0 J\n"
+        "2 w\n"
+        "[2 2] 1 d\n"
+        "0 0 m\n"
+        "0 8 l\n"
+        "8 0 m\n"
+        "8 8 l\n"
+        "S\n"
+        "[2 2] -1 d\n"
+        "2 0 m\n"
+        "2 8 l\n"
+        "6 0 m\n"
+        "6 8 l\n"
+        "S\n"
+        "[2 2] 1 d\n"
+        "4 0 m\n"
+        "4 8 l\n"
+        "S\n", # Dense4Pattern
+
+        "0 J\n"
+        "2 w\n"
+        "[2 6] -1 d\n"
+        "0 0 m\n"
+        "0 8 l\n"
+        "8 0 m\n"
+        "8 8 l\n"
+        "S\n"
+        "[2 2] 1 d\n"
+        "2 0 m\n"
+        "2 8 l\n"
+        "6 0 m\n"
+        "6 8 l\n"
+        "S\n"
+        "[2 6] 3 d\n"
+        "4 0 m\n"
+        "4 8 l\n"
+        "S\n", # Dense5Pattern
+
+        "0 J\n"
+        "2 w\n"
+        "[2 6] -1 d\n"
+        "0 0 m\n"
+        "0 8 l\n"
+        "8 0 m\n"
+        "8 8 l\n"
+        "S\n"
+        "[2 6] 3 d\n"
+        "4 0 m\n"
+        "4 8 l\n"
+        "S\n", # Dense6Pattern
+
+        "0 J\n"
+        "2 w\n"
+        "[2 6] -1 d\n"
+        "0 0 m\n"
+        "0 8 l\n"
+        "8 0 m\n"
+        "8 8 l\n"
+        "S\n", # Dense7Pattern
+
+        "1 w\n"
+        "0 4 m\n"
+        "8 4 l\n"
+        "S\n", # HorPattern
+
+        "1 w\n"
+        "4 0 m\n"
+        "4 8 l\n"
+        "S\n", # VerPattern
+
+        "1 w\n"
+        "4 0 m\n"
+        "4 8 l\n"
+        "0 4 m\n"
+        "8 4 l\n"
+        "S\n", # CrossPattern
+
+        "1 w\n"
+        "-1 5 m\n"
+        "5 -1 l\n"
+        "3 9 m\n"
+        "9 3 l\n"
+        "S\n", # BDiagPattern
+
+        "1 w\n"
+        "-1 3 m\n"
+        "5 9 l\n"
+        "3 -1 m\n"
+        "9 5 l\n"
+        "S\n", # FDiagPattern
+
+        "1 w\n"
+        "-1 3 m\n"
+        "5 9 l\n"
+        "3 -1 m\n"
+        "9 5 l\n"
+        "-1 5 m\n"
+        "5 -1 l\n"
+        "3 9 m\n"
+        "9 3 l\n"
+        "S\n", # DiagCrossPattern
+    ) # }}}
+
+    def __init__(self, pattern_num, matrix):
+        super(QtPattern, self).__init__(pattern_num, matrix)
+        self.write(self.qt_patterns[pattern_num-2])
+
+class TexturePattern(TilingPattern):
+
+    def __init__(self, pixmap, matrix, pdf, clone=None):
+        if clone is None:
+            image = pixmap.toImage()
+            cache_key = pixmap.cacheKey()
+            imgref = pdf.add_image(image, cache_key)
+            paint_type = (2 if image.format() in {QImage.Format_MonoLSB,
+                                                  QImage.Format_Mono} else 1)
+            super(TexturePattern, self).__init__(
+                cache_key, matrix, w=image.width(), h=image.height(),
+                paint_type=paint_type)
+            m = (self.w, 0, 0, -self.h, 0, self.h)
+            self.resources['XObject'] = Dictionary({'Texture':imgref})
+            self.write_line('%s cm /Texture Do'%(' '.join(map(fmtnum, m))))
+        else:
+            super(TexturePattern, self).__init__(
+                clone.cache_key[1], matrix, w=clone.w, h=clone.h,
+                paint_type=clone.paint_type)
+            self.resources['XObject'] = Dictionary(clone.resources['XObject'])
+            self.write(clone.getvalue())
+
 class GraphicsState(object):
 
     FIELDS = ('fill', 'stroke', 'opacity', 'transform', 'brush_origin',
-              'clip', 'do_fill', 'do_stroke')
+              'clip_updated', 'do_fill', 'do_stroke')
 
     def __init__(self):
         self.fill = QBrush()
@@ -51,9 +257,10 @@ class GraphicsState(object):
         self.opacity = 1.0
         self.transform = QTransform()
         self.brush_origin = QPointF()
-        self.clip = QPainterPath()
+        self.clip_updated = False
         self.do_fill = False
         self.do_stroke = True
+        self.qt_pattern_cache = {}
 
     def __eq__(self, other):
         for x in self.FIELDS:
@@ -68,16 +275,20 @@ class GraphicsState(object):
         ans.opacity = self.opacity
         ans.transform = self.transform * QTransform()
         ans.brush_origin = QPointF(self.brush_origin)
-        ans.clip = self.clip
+        ans.clip_updated = self.clip_updated
        ans.do_fill, ans.do_stroke = self.do_fill, self.do_stroke
         return ans
 
 class Graphics(object):
 
-    def __init__(self):
+    def __init__(self, page_width_px, page_height_px):
         self.base_state = GraphicsState()
         self.current_state = GraphicsState()
         self.pending_state = None
+        self.page_width_px, self.page_height_px = (page_width_px, page_height_px)
+
+    def begin(self, pdf):
+        self.pdf = pdf
 
     def update_state(self, state, painter):
         flags = state.state()
@@ -102,21 +313,22 @@ class Graphics(object):
             s.opacity = state.opacity()
 
         if flags & QPaintEngine.DirtyClipPath or flags & QPaintEngine.DirtyClipRegion:
-            s.clip = painter.clipPath()
+            s.clip_updated = True
 
     def reset(self):
         self.current_state = GraphicsState()
         self.pending_state = None
 
-    def __call__(self, pdf, pdf_system, painter):
+    def __call__(self, pdf_system, painter):
         # Apply the currently pending state to the PDF
         if self.pending_state is None:
             return
 
         pdf_state = self.current_state
         ps = self.pending_state
+        pdf = self.pdf
 
-        if (ps.transform != pdf_state.transform or ps.clip != pdf_state.clip):
+        if ps.transform != pdf_state.transform or ps.clip_updated:
             pdf.restore_stack()
             pdf.save_stack()
             pdf_state = self.base_state
@@ -125,29 +337,71 @@ class Graphics(object):
             pdf.transform(ps.transform)
 
         if (pdf_state.opacity != ps.opacity or pdf_state.stroke != ps.stroke):
-            self.apply_stroke(ps, pdf, pdf_system, painter)
+            self.apply_stroke(ps, pdf_system, painter)
 
         if (pdf_state.opacity != ps.opacity or pdf_state.fill != ps.fill or
             pdf_state.brush_origin != ps.brush_origin):
-            self.apply_fill(ps, pdf, pdf_system, painter)
+            self.apply_fill(ps, pdf_system, painter)
 
-        if (pdf_state.clip != ps.clip):
-            p = convert_path(ps.clip)
-            fill_rule = {Qt.OddEvenFill:'evenodd',
-                        Qt.WindingFill:'winding'}[ps.clip.fillRule()]
-            pdf.add_clip(p, fill_rule=fill_rule)
+        if ps.clip_updated:
+            ps.clip_updated = False
+            path = painter.clipPath()
+            if not path.isEmpty():
+                p = convert_path(path)
+                fill_rule = {Qt.OddEvenFill:'evenodd',
+                            Qt.WindingFill:'winding'}[path.fillRule()]
+                pdf.add_clip(p, fill_rule=fill_rule)
 
         self.current_state = self.pending_state
         self.pending_state = None
 
-    def apply_stroke(self, state, pdf, pdf_system, painter):
-        # TODO: Handle pens with non solid brushes by setting the colorspace
-        # for stroking to a pattern
+    def convert_brush(self, brush, brush_origin, global_opacity,
+                      pdf_system, qt_system):
+        # Convert a QBrush to PDF operators
+        style = brush.style()
+        pdf = self.pdf
+
+        pattern = color = pat = None
+        opacity = global_opacity
+        do_fill = True
+
+        matrix = (QTransform.fromTranslate(brush_origin.x(), brush_origin.y())
+                  * pdf_system * qt_system.inverted()[0])
+        vals = list(brush.color().getRgbF())
+        self.brushobj = None
+
+        if style <= Qt.DiagCrossPattern:
+            opacity *= vals[-1]
+            color = vals[:3]
+
+            if style > Qt.SolidPattern:
+                pat = QtPattern(style, matrix)
+
+        elif style == Qt.TexturePattern:
+            pat = TexturePattern(brush.texture(), matrix, pdf)
+            if pat.paint_type == 2:
+                opacity *= vals[-1]
+                color = vals[:3]
+
+        elif style == Qt.LinearGradientPattern:
+            pat = LinearGradientPattern(brush, matrix, pdf, self.page_width_px,
+                                        self.page_height_px)
+            opacity *= pat.const_opacity
+        # TODO: Add support for radial/conical gradient fills
+
+        if opacity < 1e-4 or style == Qt.NoBrush:
+            do_fill = False
+        self.brushobj = Brush(brush_origin, pat, color)
+
+        if pat is not None:
+            pattern = pdf.add_pattern(pat)
+        return color, opacity, pattern, do_fill
+
+    def apply_stroke(self, state, pdf_system, painter):
         # TODO: Support miter limit by using QPainterPathStroker
         pen = state.stroke
         self.pending_state.do_stroke = True
-        if pen.style() == Qt.NoPen:
-            self.pending_state.do_stroke = False
+        pdf = self.pdf
 
         # Width
         w = pen.widthF()
@@ -172,25 +426,54 @@ class Graphics(object):
                 Qt.DashDotDotLine:[3, 2, 1, 2, 1, 2]}.get(pen.style(), [])
             if ps:
                 pdf.serialize(Array(ps))
-                pdf.current_page.write(' d ')
+                pdf.current_page.write(' 0 d ')
 
         # Stroke fill
-        b = pen.brush()
-        vals = list(b.color().getRgbF())
-        vals[-1] *= state.opacity
-        color = Color(*vals)
-        pdf.set_stroke_color(color)
-
-        if vals[-1] < 1e-5 or b.style() == Qt.NoBrush:
+        color, opacity, pattern, self.pending_state.do_stroke = self.convert_brush(
+            pen.brush(), state.brush_origin, state.opacity, pdf_system,
+            painter.transform())
+        self.pdf.apply_stroke(color, pattern, opacity)
+        if pen.style() == Qt.NoPen:
             self.pending_state.do_stroke = False
 
-    def apply_fill(self, state, pdf, pdf_system, painter):
+    def apply_fill(self, state, pdf_system, painter):
         self.pending_state.do_fill = True
-        b = state.fill
-        if b.style() == Qt.NoBrush:
-            self.pending_state.do_fill = False
-        vals = list(b.color().getRgbF())
-        vals[-1] *= state.opacity
-        color = Color(*vals)
-        pdf.set_fill_color(color)
+        color, opacity, pattern, self.pending_state.do_fill = self.convert_brush(
+            state.fill, state.brush_origin, state.opacity, pdf_system,
+            painter.transform())
+        self.pdf.apply_fill(color, pattern, opacity)
+        self.last_fill = self.brushobj
+
+    def __enter__(self):
+        self.pdf.save_stack()
+
+    def __exit__(self, *args):
+        self.pdf.restore_stack()
+
+    def resolve_fill(self, rect, pdf_system, qt_system):
+        '''
+        Qt's paint system does not update brushOrigin when using
+        TexturePatterns and it also uses TexturePatterns to emulate gradients,
+        leading to brokenness. So this method allows the paint engine to update
+        the brush origin before painting an object. While not perfect, this is
+        better than nothing. The problem is that if the rect being filled has a
+        border, then QtWebKit generates an image of the rect size - border but
+        fills the full rect, and there's no way for the paint engine to know
+        that and adjust the brush origin.
+        '''
+        if not hasattr(self, 'last_fill') or not self.current_state.do_fill:
+            return
+
+        if isinstance(self.last_fill.brush, TexturePattern):
+            tl = rect.topLeft()
+            if tl == self.last_fill.origin:
+                return
+
+            matrix = (QTransform.fromTranslate(tl.x(), tl.y())
+                      * pdf_system * qt_system.inverted()[0])
+
+            pat = TexturePattern(None, matrix, self.pdf, clone=self.last_fill.brush)
+            pattern = self.pdf.add_pattern(pat)
+            self.pdf.apply_fill(self.last_fill.color, pattern)
@@ -17,10 +17,14 @@ from calibre.ebooks.pdf.render.common import Array, Name, Dictionary, String
 class Destination(Array):
 
     def __init__(self, start_page, pos, get_pageref):
-        super(Destination, self).__init__(
-            [get_pageref(start_page + pos['column']), Name('XYZ'), pos['left'],
-                pos['top'], None]
-        )
+        pnum = start_page + pos['column']
+        try:
+            pref = get_pageref(pnum)
+        except IndexError:
+            pref = get_pageref(pnum-1)
+        super(Destination, self).__init__([
+            pref, Name('XYZ'), pos['left'], pos['top'], None
+        ])
 
 class Links(object):
 
@@ -58,7 +62,13 @@ class Links(object):
                     0])})
             if is_local:
                 path = combined_path if href else path
-                annot['Dest'] = self.anchors[path][frag]
+                try:
+                    annot['Dest'] = self.anchors[path][frag]
+                except KeyError:
+                    try:
+                        annot['Dest'] = self.anchors[path][None]
+                    except KeyError:
+                        pass
             else:
                 url = href + (('#'+frag) if frag else '')
                 purl = urlparse(url)
@@ -17,18 +17,25 @@ GlyphInfo* get_glyphs(QPointF &p, const QTextItem &text_item) {
    QFontEngine *fe = ti.fontEngine;
    qreal size = ti.fontEngine->fontDef.pixelSize;
 #ifdef Q_WS_WIN
-    if (ti.fontEngine->type() == QFontEngine::Win) {
+    if (false && ti.fontEngine->type() == QFontEngine::Win) {
+        // This is used in the Qt sourcecode, but it gives incorrect results,
+        // so I have disabled it. I dont understand how it works in qpdf.cpp
        QFontEngineWin *fe = static_cast<QFontEngineWin *>(ti.fontEngine);
+        // I think this should be tmHeight - tmInternalLeading, but pixelSize
+        // seems to work on windows as well, so leave it as pixelSize
        size = fe->tm.tmHeight;
    }
 #endif
+    int synthesized = ti.fontEngine->synthesized();
+    qreal stretch = synthesized & QFontEngine::SynthesizedStretch ? ti.fontEngine->fontDef.stretch/100. : 1.;
+
    QVarLengthArray<glyph_t> glyphs;
    QVarLengthArray<QFixedPoint> positions;
    QTransform m = QTransform::fromTranslate(p.x(), p.y());
    fe->getGlyphPositions(ti.glyphs, m, ti.flags, glyphs, positions);
    QVector<QPointF> points = QVector<QPointF>(positions.count());
    for (int i = 0; i < positions.count(); i++) {
-        points[i].setX(positions[i].x.toReal());
+        points[i].setX(positions[i].x.toReal()/stretch);
        points[i].setY(positions[i].y.toReal());
    }
|
|||||||
|
|
||||||
const quint32 *tag = reinterpret_cast<const quint32 *>("name");
|
const quint32 *tag = reinterpret_cast<const quint32 *>("name");
|
||||||
|
|
||||||
return new GlyphInfo(fe->getSfntTable(qToBigEndian(*tag)), size, points, indices);
|
return new GlyphInfo(fe->getSfntTable(qToBigEndian(*tag)), size, stretch, points, indices);
|
||||||
}
|
}
|
||||||
|
|
||||||
GlyphInfo::GlyphInfo(const QByteArray& name, qreal size, const QVector<QPointF> &positions, const QVector<unsigned int> &indices) :name(name), positions(positions), size(size), indices(indices) {
|
GlyphInfo::GlyphInfo(const QByteArray& name, qreal size, qreal stretch, const QVector<QPointF> &positions, const QVector<unsigned int> &indices) :name(name), positions(positions), size(size), stretch(stretch), indices(indices) {
|
||||||
}
|
}
|
||||||
|
|
||||||
QByteArray get_sfnt_table(const QTextItem &text_item, const char* tag_name) {
|
QByteArray get_sfnt_table(const QTextItem &text_item, const char* tag_name) {
|
||||||
|
@@ -17,9 +17,10 @@ class GlyphInfo {
        QByteArray name;
        QVector<QPointF> positions;
        qreal size;
+        qreal stretch;
        QVector<unsigned int> indices;
 
        GlyphInfo(const GlyphInfo&);
-        GlyphInfo(const QByteArray &name, qreal size, const QVector<QPointF> &positions, const QVector<unsigned int> &indices);
+        GlyphInfo(const QByteArray &name, qreal size, qreal stretch, const QVector<QPointF> &positions, const QVector<unsigned int> &indices);
 
 private:
@@ -13,9 +13,10 @@ class GlyphInfo {
 public:
    QByteArray name;
    qreal size;
+    qreal stretch;
    QVector<QPointF> &positions;
    QVector<unsigned int> indices;
-    GlyphInfo(const QByteArray &name, qreal size, const QVector<QPointF> &positions, const QVector<unsigned int> &indices);
+    GlyphInfo(const QByteArray &name, qreal size, qreal stretch, const QVector<QPointF> &positions, const QVector<unsigned int> &indices);
 private:
    GlyphInfo(const GlyphInfo& g);
@@ -9,20 +9,19 @@ __docformat__ = 'restructuredtext en'
 
 import hashlib
 from future_builtins import map
-from itertools import izip
-from collections import namedtuple
+from PyQt4.Qt import QBuffer, QByteArray, QImage, Qt, QColor, qRgba
 
 from calibre.constants import (__appname__, __version__)
 from calibre.ebooks.pdf.render.common import (
     Reference, EOL, serialize, Stream, Dictionary, String, Name, Array,
-    GlyphIndex, fmtnum)
+    fmtnum)
 from calibre.ebooks.pdf.render.fonts import FontManager
 from calibre.ebooks.pdf.render.links import Links
+from calibre.utils.date import utcnow
 
 PDFVER = b'%PDF-1.3'
 
-Color = namedtuple('Color', 'red green blue opacity')
-
 class IndirectObjects(object):
 
     def __init__(self):
|
|||||||
self.opacities = {}
|
self.opacities = {}
|
||||||
self.fonts = {}
|
self.fonts = {}
|
||||||
self.xobjects = {}
|
self.xobjects = {}
|
||||||
|
self.patterns = {}
|
||||||
|
|
||||||
def set_opacity(self, opref):
|
def set_opacity(self, opref):
|
||||||
if opref not in self.opacities:
|
if opref not in self.opacities:
|
||||||
@ -108,6 +108,11 @@ class Page(Stream):
|
|||||||
self.xobjects[imgref] = 'Image%d'%len(self.xobjects)
|
self.xobjects[imgref] = 'Image%d'%len(self.xobjects)
|
||||||
return self.xobjects[imgref]
|
return self.xobjects[imgref]
|
||||||
|
|
||||||
|
def add_pattern(self, patternref):
|
||||||
|
if patternref not in self.patterns:
|
||||||
|
self.patterns[patternref] = 'Pat%d'%len(self.patterns)
|
||||||
|
return self.patterns[patternref]
|
||||||
|
|
||||||
def add_resources(self):
|
def add_resources(self):
|
||||||
r = Dictionary()
|
r = Dictionary()
|
||||||
if self.opacities:
|
if self.opacities:
|
||||||
@ -125,6 +130,13 @@ class Page(Stream):
|
|||||||
for ref, name in self.xobjects.iteritems():
|
for ref, name in self.xobjects.iteritems():
|
||||||
xobjects[name] = ref
|
xobjects[name] = ref
|
||||||
r['XObject'] = xobjects
|
r['XObject'] = xobjects
|
||||||
|
if self.patterns:
|
||||||
|
r['ColorSpace'] = Dictionary({'PCSp':Array(
|
||||||
|
[Name('Pattern'), Name('DeviceRGB')])})
|
||||||
|
patterns = Dictionary()
|
||||||
|
for ref, name in self.patterns.iteritems():
|
||||||
|
patterns[name] = ref
|
||||||
|
r['Pattern'] = patterns
|
||||||
if r:
|
if r:
|
||||||
self.page_dict['Resources'] = r
|
self.page_dict['Resources'] = r
|
||||||
|
|
||||||
@ -154,54 +166,6 @@ class Path(object):
|
|||||||
def close(self):
|
def close(self):
|
||||||
self.ops.append(('h',))
|
self.ops.append(('h',))
|
||||||
|
|
||||||
class Text(object):
|
|
||||||
|
|
||||||
def __init__(self):
|
|
||||||
self.transform = self.default_transform = [1, 0, 0, 1, 0, 0]
|
|
||||||
self.font_name = 'Times-Roman'
|
|
||||||
self.font_path = None
|
|
||||||
self.horizontal_scale = self.default_horizontal_scale = 100
|
|
||||||
self.word_spacing = self.default_word_spacing = 0
|
|
||||||
self.char_space = self.default_char_space = 0
|
|
||||||
self.glyph_adjust = self.default_glyph_adjust = None
|
|
||||||
self.size = 12
|
|
||||||
self.text = ''
|
|
||||||
|
|
||||||
def set_transform(self, *args):
|
|
||||||
if len(args) == 1:
|
|
||||||
m = args[0]
|
|
||||||
vals = [m.m11(), m.m12(), m.m21(), m.m22(), m.dx(), m.dy()]
|
|
||||||
else:
|
|
||||||
vals = args
|
|
||||||
self.transform = vals
|
|
||||||
|
|
||||||
def pdf_serialize(self, stream, font_name):
|
|
||||||
if not self.text: return
|
|
||||||
stream.write_line('BT ')
|
|
||||||
serialize(Name(font_name), stream)
|
|
||||||
stream.write(' %s Tf '%fmtnum(self.size))
|
|
||||||
stream.write(' '.join(map(fmtnum, self.transform)) + ' Tm ')
|
|
||||||
if self.horizontal_scale != self.default_horizontal_scale:
|
|
||||||
stream.write('%s Tz '%fmtnum(self.horizontal_scale))
|
|
||||||
if self.word_spacing != self.default_word_spacing:
|
|
||||||
stream.write('%s Tw '%fmtnum(self.word_spacing))
|
|
||||||
if self.char_space != self.default_char_space:
|
|
||||||
stream.write('%s Tc '%fmtnum(self.char_space))
|
|
||||||
stream.write_line()
|
|
||||||
if self.glyph_adjust is self.default_glyph_adjust:
|
|
||||||
serialize(String(self.text), stream)
|
|
||||||
stream.write(' Tj ')
|
|
||||||
else:
|
|
||||||
chars = Array()
|
|
||||||
frac, widths = self.glyph_adjust
|
|
||||||
for c, width in izip(self.text, widths):
|
|
||||||
chars.append(String(c))
|
|
||||||
chars.append(int(width * frac))
|
|
||||||
serialize(chars, stream)
|
|
||||||
stream.write(' TJ ')
|
|
||||||
stream.write_line('ET')
|
|
||||||
|
|
||||||
|
|
||||||
class Catalog(Dictionary):
|
class Catalog(Dictionary):
|
||||||
|
|
||||||
def __init__(self, pagetree):
|
def __init__(self, pagetree):
|
||||||
@@ -232,7 +196,9 @@ class HashingStream(object):
         self.last_char = b''
 
     def write(self, raw):
-        raw = raw if isinstance(raw, bytes) else raw.encode('ascii')
+        self.write_raw(raw if isinstance(raw, bytes) else raw.encode('ascii'))
+
+    def write_raw(self, raw):
         self.f.write(raw)
         self.hashobj.update(raw)
         if raw:
@@ -294,13 +260,20 @@ class PDFStream(object):
         self.objects.add(PageTree(page_size))
         self.objects.add(Catalog(self.page_tree))
         self.current_page = Page(self.page_tree, compress=self.compress)
-        self.info = Dictionary({'Creator':String(creator),
-                                'Producer':String(creator)})
+        self.info = Dictionary({
+            'Creator':String(creator),
+            'Producer':String(creator),
+            'CreationDate': utcnow(),
+        })
         self.stroke_opacities, self.fill_opacities = {}, {}
         self.font_manager = FontManager(self.objects, self.compress)
         self.image_cache = {}
+        self.pattern_cache, self.shader_cache = {}, {}
         self.debug = debug
         self.links = Links(self, mark_links, page_size)
+        i = QImage(1, 1, QImage.Format_ARGB32)
+        i.fill(qRgba(0, 0, 0, 255))
+        self.alpha_bit = i.constBits().asstring(4).find(b'\xff')
 
     @property
     def page_tree(self):
@@ -334,9 +307,6 @@ class PDFStream(object):
         cm = ' '.join(map(fmtnum, vals))
         self.current_page.write_line(cm + ' cm')
 
-    def set_rgb_colorspace(self):
-        self.current_page.write_line('/DeviceRGB CS /DeviceRGB cs')
-
     def save_stack(self):
         self.current_page.write_line('q')
 
@@ -372,35 +342,24 @@ class PDFStream(object):
     def serialize(self, o):
         serialize(o, self.current_page)
 
-    def set_stroke_color(self, color):
-        opacity = color.opacity
+    def set_stroke_opacity(self, opacity):
         if opacity not in self.stroke_opacities:
             op = Dictionary({'Type':Name('ExtGState'), 'CA': opacity})
             self.stroke_opacities[opacity] = self.objects.add(op)
         self.current_page.set_opacity(self.stroke_opacities[opacity])
-        self.current_page.write_line(' '.join(map(fmtnum, color[:3])) + ' SC')
 
-    def set_fill_color(self, color):
-        opacity = color.opacity
+    def set_fill_opacity(self, opacity):
+        opacity = float(opacity)
         if opacity not in self.fill_opacities:
             op = Dictionary({'Type':Name('ExtGState'), 'ca': opacity})
             self.fill_opacities[opacity] = self.objects.add(op)
         self.current_page.set_opacity(self.fill_opacities[opacity])
-        self.current_page.write_line(' '.join(map(fmtnum, color[:3])) + ' sc')
 
     def end_page(self):
         pageref = self.current_page.end(self.objects, self.stream)
         self.page_tree.obj.add_page(pageref)
         self.current_page = Page(self.page_tree, compress=self.compress)
 
-    def draw_text(self, text_object):
-        if text_object.font_path is None:
-            fontref = self.font_manager.add_standard_font(text_object.font_name)
-        else:
-            raise NotImplementedError()
-        name = self.current_page.add_font(fontref)
-        text_object.pdf_serialize(self.current_page, name)
-
     def draw_glyph_run(self, transform, size, font_metrics, glyphs):
         glyph_ids = {x[-1] for x in glyphs}
         fontref = self.font_manager.add_font(font_metrics, glyph_ids)
@@ -410,9 +369,8 @@ class PDFStream(object):
         self.current_page.write(' %s Tf '%fmtnum(size))
         self.current_page.write('%s Tm '%' '.join(map(fmtnum, transform)))
         for x, y, glyph_id in glyphs:
-            self.current_page.write('%s %s Td '%(fmtnum(x), fmtnum(y)))
-            serialize(GlyphIndex(glyph_id), self.current_page)
-            self.current_page.write(' Tj ')
+            self.current_page.write_raw(('%s %s Td <%04X> Tj '%(
+                fmtnum(x), fmtnum(y), glyph_id)).encode('ascii'))
         self.current_page.write_line(b' ET')
 
     def get_image(self, cache_key):
@@ -425,13 +383,108 @@ class PDFStream(object):
         self.objects.commit(r, self.stream)
         return r
 
-    def draw_image(self, x, y, xscale, yscale, imgref):
+    def add_image(self, img, cache_key):
+        ref = self.get_image(cache_key)
+        if ref is not None:
+            return ref
+
+        fmt = img.format()
+        image = QImage(img)
+        if (image.depth() == 1 and img.colorTable().size() == 2 and
+                img.colorTable().at(0) == QColor(Qt.black).rgba() and
+                img.colorTable().at(1) == QColor(Qt.white).rgba()):
+            if fmt == QImage.Format_MonoLSB:
+                image = image.convertToFormat(QImage.Format_Mono)
+            fmt = QImage.Format_Mono
+        else:
+            if (fmt != QImage.Format_RGB32 and fmt != QImage.Format_ARGB32):
+                image = image.convertToFormat(QImage.Format_ARGB32)
+                fmt = QImage.Format_ARGB32
+
+        w = image.width()
+        h = image.height()
+        d = image.depth()
+
+        if fmt == QImage.Format_Mono:
+            bytes_per_line = (w + 7) >> 3
+            data = image.constBits().asstring(bytes_per_line * h)
+            return self.write_image(data, w, h, d, cache_key=cache_key)
+
+        ba = QByteArray()
+        buf = QBuffer(ba)
+        image.save(buf, 'jpeg', 94)
+        data = bytes(ba.data())
+        has_alpha = has_mask = False
+        soft_mask = mask = None
+
+        if fmt == QImage.Format_ARGB32:
+            tmask = image.constBits().asstring(4*w*h)[self.alpha_bit::4]
+            sdata = bytearray(tmask)
+            vals = set(sdata)
+            vals.discard(255)
+            has_mask = bool(vals)
+            vals.discard(0)
+            has_alpha = bool(vals)
+
+        if has_alpha:
+            soft_mask = self.write_image(tmask, w, h, 8)
+        elif has_mask:
+            # dither the soft mask to 1bit and add it. This also helps PDF
+            # viewers without transparency support
+            bytes_per_line = (w + 7) >> 3
+            mdata = bytearray(0 for i in xrange(bytes_per_line * h))
+            spos = mpos = 0
+            for y in xrange(h):
+                for x in xrange(w):
+                    if sdata[spos]:
+                        mdata[mpos + x>>3] |= (0x80 >> (x&7))
+                    spos += 1
+                mpos += bytes_per_line
+            mdata = bytes(mdata)
+            mask = self.write_image(mdata, w, h, 1)
+
+        return self.write_image(data, w, h, 32, mask=mask, dct=True,
+                                soft_mask=soft_mask, cache_key=cache_key)
+
+    def add_pattern(self, pattern):
+        if pattern.cache_key not in self.pattern_cache:
+            self.pattern_cache[pattern.cache_key] = self.objects.add(pattern)
+        return self.current_page.add_pattern(self.pattern_cache[pattern.cache_key])
+
+    def add_shader(self, shader):
+        if shader.cache_key not in self.shader_cache:
+            self.shader_cache[shader.cache_key] = self.objects.add(shader)
+        return self.shader_cache[shader.cache_key]
+
+    def draw_image(self, x, y, width, height, imgref):
         name = self.current_page.add_image(imgref)
-        self.current_page.write('q %s 0 0 %s %s %s cm '%(fmtnum(xscale),
-            fmtnum(yscale), fmtnum(x), fmtnum(y)))
+        self.current_page.write('q %s 0 0 %s %s %s cm '%(fmtnum(width),
+            fmtnum(-height), fmtnum(x), fmtnum(y+height)))
         serialize(Name(name), self.current_page)
         self.current_page.write_line(' Do Q')
 
+    def apply_color_space(self, color, pattern, stroke=False):
+        wl = self.current_page.write_line
+        if color is not None and pattern is None:
+            wl(' '.join(map(fmtnum, color)) + (' RG' if stroke else ' rg'))
+        elif color is None and pattern is not None:
+            wl('/Pattern %s /%s %s'%('CS' if stroke else 'cs', pattern,
+                'SCN' if stroke else 'scn'))
+        elif color is not None and pattern is not None:
+            col = ' '.join(map(fmtnum, color))
+            wl('/PCSp %s %s /%s %s'%('CS' if stroke else 'cs', col, pattern,
+                'SCN' if stroke else 'scn'))
+
+    def apply_fill(self, color=None, pattern=None, opacity=None):
+        if opacity is not None:
+            self.set_fill_opacity(opacity)
+        self.apply_color_space(color, pattern)
+
+    def apply_stroke(self, color=None, pattern=None, opacity=None):
+        if opacity is not None:
+            self.set_stroke_opacity(opacity)
+        self.apply_color_space(color, pattern, stroke=True)
+
     def end(self):
         if self.current_page.getvalue():
             self.end_page()
src/calibre/ebooks/pdf/render/test.py (new file, 135 lines)
@@ -0,0 +1,135 @@
+#!/usr/bin/env python
+# vim:fileencoding=UTF-8:ts=4:sw=4:sta:et:sts=4:fdm=marker:ai
+from __future__ import (unicode_literals, division, absolute_import,
+                        print_function)
+
+__license__ = 'GPL v3'
+__copyright__ = '2012, Kovid Goyal <kovid at kovidgoyal.net>'
+__docformat__ = 'restructuredtext en'
+
+import os
+
+from PyQt4.Qt import (QBrush, QColor, QPoint, QPixmap, QPainterPath, QRectF,
+                      QApplication, QPainter, Qt, QImage, QLinearGradient,
+                      QPointF, QPen)
+QBrush, QColor, QPoint, QPixmap, QPainterPath, QRectF, Qt, QPointF
+
+from calibre.ebooks.pdf.render.engine import PdfDevice
+
+def full(p, xmax, ymax):
+    p.drawRect(0, 0, xmax, ymax)
+    p.drawPolyline(QPoint(0, 0), QPoint(xmax, 0), QPoint(xmax, ymax),
+                   QPoint(0, ymax), QPoint(0, 0))
+    pp = QPainterPath()
+    pp.addRect(0, 0, xmax, ymax)
+    p.drawPath(pp)
+    p.save()
+    for i in xrange(3):
+        col = [0, 0, 0, 200]
+        col[i] = 255
+        p.setOpacity(0.3)
+        p.fillRect(0, 0, xmax/10, xmax/10, QBrush(QColor(*col)))
+        p.setOpacity(1)
+        p.drawRect(0, 0, xmax/10, xmax/10)
+        p.translate(xmax/10, xmax/10)
+        p.scale(1, 1.5)
+    p.restore()
+
+    # p.scale(2, 2)
+    # p.rotate(45)
+    p.drawPixmap(0, 0, xmax/4, xmax/4, QPixmap(I('library.png')))
+    p.drawRect(0, 0, xmax/4, xmax/4)
+
+    f = p.font()
+    f.setPointSize(20)
+    # f.setLetterSpacing(f.PercentageSpacing, 200)
+    f.setUnderline(True)
+    # f.setOverline(True)
+    # f.setStrikeOut(True)
+    f.setFamily('Calibri')
+    p.setFont(f)
+    # p.setPen(QColor(0, 0, 255))
+    # p.scale(2, 2)
+    # p.rotate(45)
+    p.drawText(QPoint(xmax/3.9, 30), 'Some—text not By’s ū --- Д AV ff ff')
+
+    b = QBrush(Qt.HorPattern)
+    b.setColor(QColor(Qt.blue))
+    pix = QPixmap(I('console.png'))
+    w = xmax/4
+    p.fillRect(0, ymax/3, w, w, b)
+    p.fillRect(xmax/3, ymax/3, w, w, QBrush(pix))
+    x, y = 2*xmax/3, ymax/3
+    p.drawTiledPixmap(QRectF(x, y, w, w), pix, QPointF(10, 10))
+
+    x, y = 1, ymax/1.9
+    g = QLinearGradient(QPointF(x, y), QPointF(x+w, y+w))
+    g.setColorAt(0, QColor('#00f'))
+    g.setColorAt(1, QColor('#fff'))
+    p.fillRect(x, y, w, w, QBrush(g))
+
+
+def run(dev, func):
+    p = QPainter(dev)
+    if isinstance(dev, PdfDevice):
+        dev.init_page()
+    xmax, ymax = p.viewport().width(), p.viewport().height()
+    try:
+        func(p, xmax, ymax)
+    finally:
+        p.end()
+    if isinstance(dev, PdfDevice):
+        if dev.engine.errors_occurred:
+            raise SystemExit(1)
+
+def brush(p, xmax, ymax):
+    x = 0
+    y = 0
+    w = xmax/2
+    g = QLinearGradient(QPointF(x, y+w/3), QPointF(x, y+(2*w/3)))
+    g.setColorAt(0, QColor('#f00'))
+    g.setColorAt(0.5, QColor('#fff'))
+    g.setColorAt(1, QColor('#00f'))
+    g.setSpread(g.ReflectSpread)
+    p.fillRect(x, y, w, w, QBrush(g))
+    p.drawRect(x, y, w, w)
+
+def pen(p, xmax, ymax):
+    pix = QPixmap(I('console.png'))
+    pen = QPen(QBrush(pix), 60)
+    p.setPen(pen)
+    p.drawRect(0, xmax/3, xmax/3, xmax/2)
+
+def text(p, xmax, ymax):
+    f = p.font()
+    f.setPixelSize(24)
+    f.setFamily('Candara')
+    p.setFont(f)
+    p.drawText(QPoint(0, 100),
+               'Test intra glyph spacing ffagain imceo')
+
+def main():
+    app = QApplication([])
+    app
+    tdir = os.path.abspath('.')
+    pdf = os.path.join(tdir, 'painter.pdf')
+    func = brush
+    dpi = 100
+    with open(pdf, 'wb') as f:
+        dev = PdfDevice(f, xdpi=dpi, ydpi=dpi, compress=False)
+        img = QImage(dev.width(), dev.height(),
+                     QImage.Format_ARGB32_Premultiplied)
+        img.setDotsPerMeterX(dpi*39.37)
+        img.setDotsPerMeterY(dpi*39.37)
+        img.fill(Qt.white)
+        run(dev, func)
+        run(img, func)
+    path = os.path.join(tdir, 'painter.png')
+    img.save(path)
+    print ('PDF written to:', pdf)
+    print ('Image written to:', path)
+
+if __name__ == '__main__':
+    main()
@@ -766,6 +766,26 @@ class Translator(QTranslator):
 gui_thread = None
 
 qt_app = None
 
+def load_builtin_fonts():
+    global _rating_font
+    # Load the builtin fonts and any fonts added to calibre by the user to
+    # Qt
+    for ff in glob.glob(P('fonts/liberation/*.?tf')) + \
+            [P('fonts/calibreSymbols.otf')] + \
+            glob.glob(os.path.join(config_dir, 'fonts', '*.?tf')):
+        if ff.rpartition('.')[-1].lower() in {'ttf', 'otf'}:
+            with open(ff, 'rb') as s:
+                # Windows requires font files to be executable for them to be
+                # loaded successfully, so we use the in memory loader
+                fid = QFontDatabase.addApplicationFontFromData(s.read())
+                if fid > -1:
+                    fam = QFontDatabase.applicationFontFamilies(fid)
+                    fam = set(map(unicode, fam))
+                    if u'calibre Symbols' in fam:
+                        _rating_font = u'calibre Symbols'
+
 class Application(QApplication):
 
     def __init__(self, args, force_calibre_style=False,
@@ -798,27 +818,12 @@ class Application(QApplication):
         return ret
 
     def load_builtin_fonts(self, scan_for_fonts=False):
-        global _rating_font
         if scan_for_fonts:
             from calibre.utils.fonts.scanner import font_scanner
             # Start scanning the users computer for fonts
             font_scanner
 
-        # Load the builtin fonts and any fonts added to calibre by the user to
-        # Qt
-        for ff in glob.glob(P('fonts/liberation/*.?tf')) + \
-                [P('fonts/calibreSymbols.otf')] + \
-                glob.glob(os.path.join(config_dir, 'fonts', '*.?tf')):
-            if ff.rpartition('.')[-1].lower() in {'ttf', 'otf'}:
-                with open(ff, 'rb') as s:
-                    # Windows requires font files to be executable for them to be
-                    # loaded successfully, so we use the in memory loader
-                    fid = QFontDatabase.addApplicationFontFromData(s.read())
-                    if fid > -1:
-                        fam = QFontDatabase.applicationFontFamilies(fid)
-                        fam = set(map(unicode, fam))
-                        if u'calibre Symbols' in fam:
-                            _rating_font = u'calibre Symbols'
+        load_builtin_fonts()
 
     def load_calibre_style(self):
         # On OS X QtCurve resets the palette, so we preserve it explicitly
@@ -169,6 +169,10 @@ class ChooseLibraryAction(InterfaceAction):
 
         self.choose_menu = self.qaction.menu()
 
+        ac = self.create_action(spec=(_('Pick a random book'), 'random.png',
+            None, None), attr='action_pick_random')
+        ac.triggered.connect(self.pick_random)
+
         if not os.environ.get('CALIBRE_OVERRIDE_DATABASE_PATH', None):
             self.choose_menu.addAction(self.action_choose)
 
@@ -176,13 +180,11 @@ class ChooseLibraryAction(InterfaceAction):
             self.quick_menu_action = self.choose_menu.addMenu(self.quick_menu)
             self.rename_menu = QMenu(_('Rename library'))
             self.rename_menu_action = self.choose_menu.addMenu(self.rename_menu)
+            self.choose_menu.addAction(ac)
             self.delete_menu = QMenu(_('Remove library'))
             self.delete_menu_action = self.choose_menu.addMenu(self.delete_menu)
-        ac = self.create_action(spec=(_('Pick a random book'), 'random.png',
-            None, None), attr='action_pick_random')
-        ac.triggered.connect(self.pick_random)
-        self.choose_menu.addAction(ac)
+        else:
+            self.choose_menu.addAction(ac)
 
         self.rename_separator = self.choose_menu.addSeparator()
 
@@ -43,14 +43,16 @@ class StoreAction(InterfaceAction):
         icon.addFile(I('donate.png'), QSize(16, 16))
         for n, p in sorted(self.gui.istores.items(), key=lambda x: x[0].lower()):
             if p.base_plugin.affiliate:
-                self.store_list_menu.addAction(icon, n, partial(self.open_store, p))
+                self.store_list_menu.addAction(icon, n,
+                        partial(self.open_store, n))
             else:
-                self.store_list_menu.addAction(n, partial(self.open_store, p))
+                self.store_list_menu.addAction(n, partial(self.open_store, n))
 
     def do_search(self):
         return self.search()
 
     def search(self, query=''):
+        self.gui.istores.check_for_updates()
         self.show_disclaimer()
         from calibre.gui2.store.search.search import SearchDialog
         sd = SearchDialog(self.gui, self.gui, query)
@@ -125,9 +127,13 @@ class StoreAction(InterfaceAction):
         self.gui.load_store_plugins()
         self.load_menu()
 
-    def open_store(self, store_plugin):
+    def open_store(self, store_plugin_name):
+        self.gui.istores.check_for_updates()
         self.show_disclaimer()
-        store_plugin.open(self.gui)
+        # It's not too important that the updated plugin have finished loading
+        # at this point
+        self.gui.istores.join(1.0)
+        self.gui.istores[store_plugin_name].open(self.gui)
 
     def show_disclaimer(self):
         confirm(('<p>' +
@@ -8,10 +8,10 @@ from functools import partial
 from PyQt4.Qt import QThread, QObject, Qt, QProgressDialog, pyqtSignal, QTimer
 
 from calibre.gui2.dialogs.progress import ProgressDialog
-from calibre.gui2 import (question_dialog, error_dialog, info_dialog, gprefs,
+from calibre.gui2 import (error_dialog, info_dialog, gprefs,
         warning_dialog, available_width)
 from calibre.ebooks.metadata.opf2 import OPF
-from calibre.ebooks.metadata import MetaInformation, authors_to_string
+from calibre.ebooks.metadata import MetaInformation
 from calibre.constants import preferred_encoding, filesystem_encoding, DEBUG
 from calibre.utils.config import prefs
 from calibre import prints, force_unicode, as_unicode
@@ -391,25 +391,10 @@ class Adder(QObject): # {{{
         if not duplicates:
             return self.duplicates_processed()
         self.pd.hide()
-        duplicate_message = []
-        for x in duplicates:
-            duplicate_message.append(_('Already in calibre:'))
-            matching_books = self.db.books_with_same_title(x[0])
-            for book_id in matching_books:
-                aut = [a.replace('|', ',') for a in (self.db.authors(book_id,
-                    index_is_id=True) or '').split(',')]
-                duplicate_message.append('\t'+ _('%(title)s by %(author)s')%
-                        dict(title=self.db.title(book_id, index_is_id=True),
-                        author=authors_to_string(aut)))
-            duplicate_message.append(_('You are trying to add:'))
-            duplicate_message.append('\t'+_('%(title)s by %(author)s')%
-                    dict(title=x[0].title,
-                    author=x[0].format_field('authors')[1]))
-            duplicate_message.append('')
-        if question_dialog(self._parent, _('Duplicates found!'),
-                _('Books with the same title as the following already '
-                'exist in calibre. Add them anyway?'),
-                '\n'.join(duplicate_message)):
+        from calibre.gui2.dialogs.duplicates import DuplicatesQuestion
+        d = DuplicatesQuestion(self.db, duplicates, self._parent)
+        duplicates = tuple(d.duplicates)
+        if duplicates:
             pd = QProgressDialog(_('Adding duplicates...'), '', 0, len(duplicates),
                     self._parent)
             pd.setCancelButton(None)
@@ -411,7 +411,7 @@
        <item row="6" column="3" colspan="2">
         <widget class="QCheckBox" name="opt_subset_embedded_fonts">
          <property name="text">
-          <string>&amp;Subset all embedded fonts (Experimental)</string>
+          <string>&amp;Subset all embedded fonts</string>
          </property>
         </widget>
        </item>
@@ -26,6 +26,7 @@ def create_opf_file(db, book_id):
     mi.application_id = uuid.uuid4()
     old_cover = mi.cover
     mi.cover = None
+    mi.application_id = mi.uuid
     raw = metadata_to_opf(mi)
     mi.cover = old_cover
     opf_file = PersistentTemporaryFile('.opf')
@@ -33,7 +33,10 @@ from calibre.utils.config import prefs
 from calibre.utils.logging import Log
 
 class NoSupportedInputFormats(Exception):
-    pass
+
+    def __init__(self, available_formats):
+        Exception.__init__(self)
+        self.available_formats = available_formats
 
 def sort_formats_by_preference(formats, prefs):
     uprefs = [x.upper() for x in prefs]
@@ -86,7 +89,7 @@ def get_supported_input_formats_for_book(db, book_id):
     input_formats = set([x.lower() for x in supported_input_formats()])
     input_formats = sorted(available_formats.intersection(input_formats))
     if not input_formats:
-        raise NoSupportedInputFormats
+        raise NoSupportedInputFormats(tuple(x for x in available_formats if x))
     return input_formats
 
@@ -369,6 +369,7 @@ class Series(Base):
         w.setMinimumContentsLength(25)
         self.name_widget = w
         self.widgets = [QLabel('&'+self.col_metadata['name']+':', parent), w]
+        w.editTextChanged.connect(self.series_changed)
 
         self.widgets.append(QLabel('&'+self.col_metadata['name']+_(' index:'), parent))
         w = QDoubleSpinBox(parent)
@@ -382,33 +383,42 @@ class Series(Base):
         values = list(self.db.all_custom(num=self.col_id))
         values.sort(key=sort_key)
         val = self.db.get_custom(book_id, num=self.col_id, index_is_id=True)
-        s_index = self.db.get_custom_extra(book_id, num=self.col_id, index_is_id=True)
-        if s_index is None:
-            s_index = 0.0
-        self.idx_widget.setValue(s_index)
-        self.initial_index = s_index
         self.initial_val = val
+        s_index = self.db.get_custom_extra(book_id, num=self.col_id, index_is_id=True)
+        self.initial_index = s_index
+        try:
+            s_index = float(s_index)
+        except (ValueError, TypeError):
+            s_index = 1.0
+        self.idx_widget.setValue(s_index)
         val = self.normalize_db_val(val)
+        self.name_widget.blockSignals(True)
         self.name_widget.update_items_cache(values)
         self.name_widget.show_initial_value(val)
+        self.name_widget.blockSignals(False)
 
     def getter(self):
         n = unicode(self.name_widget.currentText()).strip()
         i = self.idx_widget.value()
         return n, i
 
+    def series_changed(self, val):
+        val, s_index = self.gui_val
+        if tweaks['series_index_auto_increment'] == 'no_change':
+            pass
+        elif tweaks['series_index_auto_increment'] == 'const':
+            s_index = 1.0
+        else:
+            s_index = self.db.get_next_cc_series_num_for(val,
+                    num=self.col_id)
+        self.idx_widget.setValue(s_index)
+
     def commit(self, book_id, notify=False):
         val, s_index = self.gui_val
         val = self.normalize_ui_val(val)
         if val != self.initial_val or s_index != self.initial_index:
             if val == '':
                 val = s_index = None
-            elif s_index == 0.0:
-                if tweaks['series_index_auto_increment'] != 'const':
-                    s_index = self.db.get_next_cc_series_num_for(val,
-                            num=self.col_id)
-                else:
-                    s_index = None
             return self.db.set_custom(book_id, val, extra=s_index, num=self.col_id,
                     notify=notify, commit=False, allow_case_change=True)
         else:
src/calibre/gui2/dialogs/duplicates.py (new file, 118 lines)
@@ -0,0 +1,118 @@
+#!/usr/bin/env python
+# vim:fileencoding=UTF-8:ts=4:sw=4:sta:et:sts=4:fdm=marker:ai
+from __future__ import (unicode_literals, division, absolute_import,
+                        print_function)
+
+__license__ = 'GPL v3'
+__copyright__ = '2013, Kovid Goyal <kovid at kovidgoyal.net>'
+__docformat__ = 'restructuredtext en'
+
+from PyQt4.Qt import (QDialog, QGridLayout, QIcon, QLabel, QTreeWidget,
+                      QTreeWidgetItem, Qt, QFont, QDialogButtonBox)
+
+from calibre.ebooks.metadata import authors_to_string
+
+class DuplicatesQuestion(QDialog):
+
+    def __init__(self, db, duplicates, parent=None):
+        QDialog.__init__(self, parent)
+        self.l = l = QGridLayout()
+        self.setLayout(l)
+        self.setWindowTitle(_('Duplicates found!'))
+        self.i = i = QIcon(I('dialog_question.png'))
+        self.setWindowIcon(i)
+
+        self.l1 = l1 = QLabel()
+        self.l2 = l2 = QLabel(_(
+            'Books with the same titles as the following already '
+            'exist in calibre. Select which books you want added anyway.'))
+        l2.setWordWrap(True)
+        l1.setPixmap(i.pixmap(128, 128))
+        l.addWidget(l1, 0, 0)
+        l.addWidget(l2, 0, 1)
+
+        self.dup_list = dl = QTreeWidget(self)
+        l.addWidget(dl, 1, 0, 1, 2)
+        dl.setHeaderHidden(True)
+        dl.addTopLevelItems(list(self.process_duplicates(db, duplicates)))
+        dl.expandAll()
+        dl.setIndentation(30)
+
+        self.bb = bb = QDialogButtonBox(QDialogButtonBox.Ok|QDialogButtonBox.Cancel)
+        bb.accepted.connect(self.accept)
+        bb.rejected.connect(self.reject)
+        l.addWidget(bb, 2, 0, 1, 2)
+        self.ab = ab = bb.addButton(_('Select &all'), bb.ActionRole)
+        ab.clicked.connect(self.select_all)
+        self.nb = ab = bb.addButton(_('Select &none'), bb.ActionRole)
+        ab.clicked.connect(self.select_none)
+
+        self.resize(self.sizeHint())
+        self.exec_()
+
+    def select_all(self):
+        for i in xrange(self.dup_list.topLevelItemCount()):
+            x = self.dup_list.topLevelItem(i)
+            x.setCheckState(0, Qt.Checked)
+
+    def select_none(self):
+        for i in xrange(self.dup_list.topLevelItemCount()):
+            x = self.dup_list.topLevelItem(i)
+            x.setCheckState(0, Qt.Unchecked)
+
+    def reject(self):
+        self.select_none()
+        QDialog.reject(self)
+
+    def process_duplicates(self, db, duplicates):
+        ta = _('%(title)s by %(author)s')
+        bf = QFont(self.dup_list.font())
+        bf.setBold(True)
+        itf = QFont(self.dup_list.font())
+        itf.setItalic(True)
+
+        for mi, cover, formats in duplicates:
+            item = QTreeWidgetItem([ta%dict(
+                title=mi.title, author=mi.format_field('authors')[1])], 0)
+            item.setCheckState(0, Qt.Checked)
+            item.setFlags(Qt.ItemIsEnabled|Qt.ItemIsUserCheckable)
+            item.setData(0, Qt.FontRole, bf)
+            item.setData(0, Qt.UserRole, (mi, cover, formats))
+            matching_books = db.books_with_same_title(mi)
+
+            def add_child(text):
+                c = QTreeWidgetItem([text], 1)
+                c.setFlags(Qt.ItemIsEnabled)
+                item.addChild(c)
+                return c
+
+            add_child(_('Already in calibre:')).setData(0, Qt.FontRole, itf)
+
+            for book_id in matching_books:
+                aut = [a.replace('|', ',') for a in (db.authors(book_id,
+                    index_is_id=True) or '').split(',')]
+                add_child(ta%dict(
+                    title=db.title(book_id, index_is_id=True),
+                    author=authors_to_string(aut)))
+            add_child('')
+
+            yield item
+
+    @property
+    def duplicates(self):
+        for i in xrange(self.dup_list.topLevelItemCount()):
+            x = self.dup_list.topLevelItem(i)
+            if x.checkState(0) == Qt.Checked:
+                yield x.data(0, Qt.UserRole).toPyObject()
+
+if __name__ == '__main__':
+    from PyQt4.Qt import QApplication
+    from calibre.ebooks.metadata.book.base import Metadata as M
+    from calibre.library import db
+
+    app = QApplication([])
+    db = db()
+    d = DuplicatesQuestion(db, [(M('Life of Pi', ['Yann Martel']), None, None),
+        (M('Heirs of the blade', ['Adrian Tchaikovsky']), None, None)])
+    print(tuple(d.duplicates))
@@ -1109,8 +1109,8 @@ not multiple and the destination field is multiple</string>
     <rect>
      <x>0</x>
      <y>0</y>
-     <width>205</width>
-     <height>66</height>
+     <width>934</width>
+     <height>213</height>
     </rect>
    </property>
    <layout class="QGridLayout" name="testgrid">
@@ -1269,8 +1269,8 @@ not multiple and the destination field is multiple</string>
   <slot>accept()</slot>
   <hints>
    <hint type="sourcelabel">
-    <x>252</x>
-    <y>382</y>
+    <x>258</x>
+    <y>638</y>
    </hint>
    <hint type="destinationlabel">
     <x>157</x>
@@ -1285,8 +1285,8 @@ not multiple and the destination field is multiple</string>
   <slot>reject()</slot>
   <hints>
    <hint type="sourcelabel">
-    <x>320</x>
-    <y>382</y>
+    <x>326</x>
+    <y>638</y>
    </hint>
    <hint type="destinationlabel">
     <x>286</x>
@@ -1294,5 +1294,37 @@ not multiple and the destination field is multiple</string>
    </hint>
   </hints>
  </connection>
+ <connection>
+  <sender>remove_all_tags</sender>
+  <signal>toggled(bool)</signal>
+  <receiver>remove_tags</receiver>
+  <slot>setDisabled(bool)</slot>
+  <hints>
+   <hint type="sourcelabel">
+    <x>888</x>
+    <y>266</y>
+   </hint>
+   <hint type="destinationlabel">
+    <x>814</x>
+    <y>268</y>
+   </hint>
+  </hints>
+ </connection>
+ <connection>
+  <sender>clear_languages</sender>
+  <signal>toggled(bool)</signal>
+  <receiver>languages</receiver>
+  <slot>setDisabled(bool)</slot>
+  <hints>
+   <hint type="sourcelabel">
+    <x>874</x>
+    <y>418</y>
+   </hint>
+   <hint type="destinationlabel">
+    <x>817</x>
+    <y>420</y>
+   </hint>
+  </hints>
+ </connection>
 </connections>
</ui>
@@ -519,6 +519,7 @@ class PluginUpdaterDialog(SizePersistedDialog):
         self.description.setFrameStyle(QFrame.Panel | QFrame.Sunken)
         self.description.setAlignment(Qt.AlignTop | Qt.AlignLeft)
         self.description.setMinimumHeight(40)
+        self.description.setWordWrap(True)
         layout.addWidget(self.description)

         self.button_box = QDialogButtonBox(QDialogButtonBox.Close)
(deleted file)
@@ -1,10 +0,0 @@
-#!/usr/bin/env python
-# vim:fileencoding=UTF-8:ts=4:sw=4:sta:et:sts=4:ai
-from __future__ import (unicode_literals, division, absolute_import,
-                        print_function)
-
-__license__ = 'GPL v3'
-__copyright__ = '2011, Kovid Goyal <kovid@kovidgoyal.net>'
-__docformat__ = 'restructuredtext en'
-
-
@@ -201,6 +201,7 @@ class SearchBar(QWidget): # {{{
         x.setObjectName("search")
         x.setToolTip(_("<p>Search the list of books by title, author, publisher, "
                        "tags, comments, etc.<br><br>Words separated by spaces are ANDed"))
+        x.setMinimumContentsLength(10)
         l.addWidget(x)

         self.search_button = QToolButton()
@@ -225,7 +226,7 @@ class SearchBar(QWidget): # {{{

         x = parent.saved_search = SavedSearchBox(self)
         x.setMaximumSize(QSize(150, 16777215))
-        x.setMinimumContentsLength(15)
+        x.setMinimumContentsLength(10)
         x.setObjectName("saved_search")
         l.addWidget(x)

@@ -88,13 +88,16 @@ class DateDelegate(QStyledItemDelegate): # {{{

 class PubDateDelegate(QStyledItemDelegate): # {{{

+    def __init__(self, *args, **kwargs):
+        QStyledItemDelegate.__init__(self, *args, **kwargs)
+        self.format = tweaks['gui_pubdate_display_format']
+        if self.format is None:
+            self.format = 'MMM yyyy'
+
     def displayText(self, val, locale):
         d = val.toDateTime()
         if d <= UNDEFINED_QDATETIME:
             return ''
-        self.format = tweaks['gui_pubdate_display_format']
-        if self.format is None:
-            self.format = 'MMM yyyy'
         return format_date(qt_to_dt(d, as_utc=False), self.format)

     def createEditor(self, parent, option, index):
@@ -7,8 +7,10 @@ __license__ = 'GPL v3'
 __copyright__ = '2012, Kovid Goyal <kovid at kovidgoyal.net>'
 __docformat__ = 'restructuredtext en'

-from PyQt4.Qt import (QLabel, QVBoxLayout, QListWidget, QListWidgetItem, Qt)
+from PyQt4.Qt import (QLabel, QVBoxLayout, QListWidget, QListWidgetItem, Qt,
+                      QIcon)
+
+from calibre.customize.ui import enable_plugin
 from calibre.gui2.preferences import ConfigWidgetBase, test_widget

 class ConfigWidget(ConfigWidgetBase):
@@ -31,6 +33,18 @@ class ConfigWidget(ConfigWidgetBase):
         f.itemChanged.connect(self.changed_signal)
         f.itemDoubleClicked.connect(self.toggle_item)
+
+        self.la2 = la = QLabel(_(
+            'The list of device plugins you have disabled. Uncheck an entry '
+            'to enable the plugin. calibre cannot detect devices that are '
+            'managed by disabled plugins.'))
+        la.setWordWrap(True)
+        l.addWidget(la)
+
+        self.device_plugins = f = QListWidget(f)
+        l.addWidget(f)
+        f.itemChanged.connect(self.changed_signal)
+        f.itemDoubleClicked.connect(self.toggle_item)

     def toggle_item(self, item):
         item.setCheckState(Qt.Checked if item.checkState() == Qt.Unchecked else
                 Qt.Unchecked)
@@ -46,6 +60,17 @@ class ConfigWidget(ConfigWidgetBase):
             item.setCheckState(Qt.Checked)
         self.devices.blockSignals(False)
+
+        self.device_plugins.blockSignals(True)
+        for dev in self.gui.device_manager.disabled_device_plugins:
+            n = dev.get_gui_name()
+            item = QListWidgetItem(n, self.device_plugins)
+            item.setData(Qt.UserRole, dev)
+            item.setFlags(Qt.ItemIsEnabled|Qt.ItemIsUserCheckable|Qt.ItemIsSelectable)
+            item.setCheckState(Qt.Checked)
+            item.setIcon(QIcon(I('plugins.png')))
+        self.device_plugins.sortItems()
+        self.device_plugins.blockSignals(False)

     def restore_defaults(self):
         if self.devices.count() > 0:
             self.devices.clear()
@@ -63,6 +88,12 @@ class ConfigWidget(ConfigWidgetBase):
         for dev, bl in devs.iteritems():
             dev.set_user_blacklisted_devices(bl)
+
+        for i in xrange(self.device_plugins.count()):
+            e = self.device_plugins.item(i)
+            dev = e.data(Qt.UserRole).toPyObject()
+            if e.checkState() == Qt.Unchecked:
+                enable_plugin(dev)
+
         return True # Restart required

 if __name__ == '__main__':
@@ -273,7 +273,7 @@
        <widget class="QLabel" name="label_13">
         <property name="text">
          <string><p>Remember to leave calibre running as the server only runs as long as calibre is running.
-<p>To connect to the calibre server from your device you should use a URL of the form <b>http://myhostname:8080</b> as a new catalog in the Stanza reader on your iPhone. Here myhostname should be either the fully qualified hostname or the IP address of the computer calibre is running on.</string>
+<p>To connect to the calibre server from your device you should use a URL of the form <b>http://myhostname:8080</b>. Here myhostname should be either the fully qualified hostname or the IP address of the computer calibre is running on. If you want to access the server from anywhere in the world, you will have to setup port forwarding for it on your router.</string>
         </property>
         <property name="wordWrap">
          <bool>true</bool>
@@ -49,13 +49,16 @@ class StorePlugin(object): # {{{
     See declined.txt for a list of stores that do not want to be included.
     '''

-    def __init__(self, gui, name):
-        from calibre.gui2 import JSONConfig
+    minimum_calibre_version = (0, 9, 14)

+    def __init__(self, gui, name, config=None, base_plugin=None):
         self.gui = gui
         self.name = name
-        self.base_plugin = None
-        self.config = JSONConfig('store/stores/' + ascii_filename(self.name))
+        self.base_plugin = base_plugin
+        if config is None:
+            from calibre.gui2 import JSONConfig
+            config = JSONConfig('store/stores/' + ascii_filename(self.name))
+        self.config = config

     def open(self, gui, parent=None, detail_item=None, external=False):
         '''
src/calibre/gui2/store/loader.py (new file, 197 lines)
@@ -0,0 +1,197 @@
+#!/usr/bin/env python
+# vim:fileencoding=UTF-8:ts=4:sw=4:sta:et:sts=4:fdm=marker:ai
+from __future__ import (unicode_literals, division, absolute_import,
+                        print_function)
+
+__license__ = 'GPL v3'
+__copyright__ = '2013, Kovid Goyal <kovid at kovidgoyal.net>'
+__docformat__ = 'restructuredtext en'
+
+import sys, time, io, re
+from zlib import decompressobj
+from collections import OrderedDict
+from threading import Thread
+from urllib import urlencode
+
+from calibre import prints, browser
+from calibre.constants import numeric_version, DEBUG
+from calibre.gui2.store import StorePlugin
+from calibre.utils.config import JSONConfig
+
+class VersionMismatch(ValueError):
+    def __init__(self, ver):
+        ValueError.__init__(self, 'calibre too old')
+        self.ver = ver
+
+def download_updates(ver_map={}, server='http://status.calibre-ebook.com'):
+    data = {k:type(u'')(v) for k, v in ver_map.iteritems()}
+    data['ver'] = '1'
+    url = '%s/stores?%s'%(server, urlencode(data))
+    br = browser()
+    # We use a timeout here to ensure the non-daemonic update thread does not
+    # cause calibre to hang indefinitely during shutdown
+    raw = br.open(url, timeout=4.0).read()
+
+    while raw:
+        name, raw = raw.partition(b'\0')[0::2]
+        name = name.decode('utf-8')
+        d = decompressobj()
+        src = d.decompress(raw)
+        src = src.decode('utf-8')
+        # Python complains if there is a coding declaration in a unicode string
+        src = re.sub(r'^#.*coding\s*[:=]\s*([-\w.]+)', '#', src, flags=re.MULTILINE)
+        # Translate newlines to \n
+        src = io.StringIO(src, newline=None).getvalue()
+        yield name, src
+        raw = d.unused_data
+
+class Stores(OrderedDict):
+
+    CHECK_INTERVAL = 24 * 60 * 60
+
+    def builtins_loaded(self):
+        self.last_check_time = 0
+        self.version_map = {}
+        self.cached_version_map = {}
+        self.name_rmap = {}
+        for key, val in self.iteritems():
+            prefix, name = val.__module__.rpartition('.')[0::2]
+            if prefix == 'calibre.gui2.store.stores' and name.endswith('_plugin'):
+                module = sys.modules[val.__module__]
+                sv = getattr(module, 'store_version', None)
+                if sv is not None:
+                    name = name.rpartition('_')[0]
+                    self.version_map[name] = sv
+                    self.name_rmap[name] = key
+        self.cache_file = JSONConfig('store/plugin_cache')
+        self.load_cache()
+
+    def load_cache(self):
+        # Load plugins from on disk cache
+        remove = set()
+        pat = re.compile(r'^store_version\s*=\s*(\d+)', re.M)
+        for name, src in self.cache_file.iteritems():
+            try:
+                key = self.name_rmap[name]
+            except KeyError:
+                # Plugin has been disabled
+                m = pat.search(src[:512])
+                if m is not None:
+                    try:
+                        self.cached_version_map[name] = int(m.group(1))
+                    except (TypeError, ValueError):
+                        pass
+                continue
+
+            try:
+                obj, ver = self.load_object(src, key)
+            except VersionMismatch as e:
+                self.cached_version_map[name] = e.ver
+                continue
+            except:
+                import traceback
+                prints('Failed to load cached store:', name)
+                traceback.print_exc()
+            else:
+                if not self.replace_plugin(ver, name, obj, 'cached'):
+                    # Builtin plugin is newer than cached
+                    remove.add(name)
+
+        if remove:
+            with self.cache_file:
+                for name in remove:
+                    del self.cache_file[name]
+
+    def check_for_updates(self):
+        if hasattr(self, 'update_thread') and self.update_thread.is_alive():
+            return
+        if time.time() - self.last_check_time < self.CHECK_INTERVAL:
+            return
+        self.last_check_time = time.time()
+        try:
+            self.update_thread.start()
+        except (RuntimeError, AttributeError):
+            self.update_thread = Thread(target=self.do_update)
+            self.update_thread.start()
+
+    def join(self, timeout=None):
+        hasattr(self, 'update_thread') and self.update_thread.join(timeout)
+
+    def download_updates(self):
+        ver_map = {name:max(ver, self.cached_version_map.get(name, -1))
+                for name, ver in self.version_map.iteritems()}
+        try:
+            updates = download_updates(ver_map)
+        except:
+            import traceback
+            traceback.print_exc()
+        else:
+            for name, code in updates:
+                yield name, code
+
+    def do_update(self):
+        replacements = {}
+
+        for name, src in self.download_updates():
+            try:
+                key = self.name_rmap[name]
+            except KeyError:
+                # Plugin has been disabled
+                replacements[name] = src
+                continue
+            try:
+                obj, ver = self.load_object(src, key)
+            except VersionMismatch as e:
+                self.cached_version_map[name] = e.ver
+                replacements[name] = src
+                continue
+            except:
+                import traceback
+                prints('Failed to load downloaded store:', name)
+                traceback.print_exc()
+            else:
+                if self.replace_plugin(ver, name, obj, 'downloaded'):
+                    replacements[name] = src
+
+        if replacements:
+            with self.cache_file:
+                for name, src in replacements.iteritems():
+                    self.cache_file[name] = src
+
+    def replace_plugin(self, ver, name, obj, source):
+        if ver > self.version_map[name]:
+            if DEBUG:
+                prints('Loaded', source, 'store plugin for:',
+                       self.name_rmap[name], 'at version:', ver)
+            self[self.name_rmap[name]] = obj
+            self.version_map[name] = ver
+            return True
+        return False
+
+    def load_object(self, src, key):
+        namespace = {}
+        builtin = self[key]
+        exec src in namespace
+        ver = namespace['store_version']
+        cls = None
+        for x in namespace.itervalues():
+            if (isinstance(x, type) and issubclass(x, StorePlugin) and x is not
+                    StorePlugin):
+                cls = x
+                break
+        if cls is None:
+            raise ValueError('No store plugin found')
+        if cls.minimum_calibre_version > numeric_version:
+            raise VersionMismatch(ver)
+        return cls(builtin.gui, builtin.name, config=builtin.config,
+                   base_plugin=builtin.base_plugin), ver
+
+if __name__ == '__main__':
+    st = time.time()
+    for name, code in download_updates():
+        print(name)
+        print(code)
+        print('\n', '_'*80, '\n', sep='')
+    print('Time to download all plugins: %.2f'%(time.time() - st))
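The `download_updates()` generator above parses a simple concatenated stream: each record is a plugin name, a NUL separator, then the zlib-compressed plugin source, with `Decompress.unused_data` used to locate the start of the next record. A minimal round-trip sketch of that format (the `pack()` producer is hypothetical, shown only to illustrate the layout the parser expects; written for modern Python 3):

```python
import zlib

def pack(updates):
    # Hypothetical producer side of the stream: name, NUL, zlib-compressed
    # source; records are simply concatenated back to back.
    return b''.join(
        name.encode('utf-8') + b'\0' + zlib.compress(src.encode('utf-8'))
        for name, src in updates)

def unpack(raw):
    # Mirrors the parsing loop in download_updates() above: split the name
    # at the first NUL, inflate exactly one zlib stream, then continue from
    # Decompress.unused_data, which holds the bytes after that stream.
    while raw:
        name, raw = raw.partition(b'\0')[0::2]
        d = zlib.decompressobj()
        src = d.decompress(raw).decode('utf-8')
        yield name.decode('utf-8'), src
        raw = d.unused_data

records = [('store_a', 'store_version = 2\n'), ('store_b', 'store_version = 5\n')]
assert list(unpack(pack(records))) == records
```

Using `unused_data` is what lets the reader frame records without any length prefix: the zlib stream itself marks where each compressed payload ends.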
@@ -194,6 +194,7 @@ class SearchDialog(QDialog, Ui_Dialog):
         query = self.clean_query(query)
         shuffle(store_names)
         # Add plugins that the user has checked to the search pool's work queue.
+        self.gui.istores.join(4.0) # Wait for updated plugins to load
         for n in store_names:
             if self.store_checks[n].isChecked():
                 self.search_pool.add_task(query, n, self.gui.istores[n], self.max_results, self.timeout)
@ -1,6 +1,7 @@
|
|||||||
# -*- coding: utf-8 -*-
|
# -*- coding: utf-8 -*-
|
||||||
|
|
||||||
from __future__ import (unicode_literals, division, absolute_import, print_function)
|
from __future__ import (unicode_literals, division, absolute_import, print_function)
|
||||||
|
store_version = 1 # Needed for dynamic plugin loading
|
||||||
|
|
||||||
__license__ = 'GPL 3'
|
__license__ = 'GPL 3'
|
||||||
__copyright__ = '2011, John Schember <john@nachtimwald.com>'
|
__copyright__ = '2011, John Schember <john@nachtimwald.com>'
|
||||||
|
@ -1,6 +1,7 @@
|
|||||||
# -*- coding: utf-8 -*-
|
# -*- coding: utf-8 -*-
|
||||||
|
|
||||||
from __future__ import (unicode_literals, division, absolute_import, print_function)
|
from __future__ import (unicode_literals, division, absolute_import, print_function)
|
||||||
|
store_version = 1 # Needed for dynamic plugin loading
|
||||||
|
|
||||||
__license__ = 'GPL 3'
|
__license__ = 'GPL 3'
|
||||||
__copyright__ = '2011, John Schember <john@nachtimwald.com>'
|
__copyright__ = '2011, John Schember <john@nachtimwald.com>'
|
||||||
@ -21,4 +22,4 @@ class AmazonESKindleStore(AmazonUKKindleStore):
|
|||||||
'&linkCode=ur2&camp=3626&creative=24790')
|
'&linkCode=ur2&camp=3626&creative=24790')
|
||||||
search_url = 'http://www.amazon.es/s/?url=search-alias%3Ddigital-text&field-keywords='
|
search_url = 'http://www.amazon.es/s/?url=search-alias%3Ddigital-text&field-keywords='
|
||||||
|
|
||||||
author_article = 'de '
|
author_article = 'de '
|
||||||
|
@ -1,6 +1,7 @@
|
|||||||
# -*- coding: utf-8 -*-
|
# -*- coding: utf-8 -*-
|
||||||
|
|
||||||
from __future__ import (unicode_literals, division, absolute_import, print_function)
|
from __future__ import (unicode_literals, division, absolute_import, print_function)
|
||||||
|
store_version = 1 # Needed for dynamic plugin loading
|
||||||
|
|
||||||
__license__ = 'GPL 3'
|
__license__ = 'GPL 3'
|
||||||
__copyright__ = '2011, John Schember <john@nachtimwald.com>'
|
__copyright__ = '2011, John Schember <john@nachtimwald.com>'
|
||||||
|
@ -1,6 +1,7 @@
|
|||||||
# -*- coding: utf-8 -*-
|
# -*- coding: utf-8 -*-
|
||||||
|
|
||||||
from __future__ import (unicode_literals, division, absolute_import, print_function)
|
from __future__ import (unicode_literals, division, absolute_import, print_function)
|
||||||
|
store_version = 1 # Needed for dynamic plugin loading
|
||||||
|
|
||||||
__license__ = 'GPL 3'
|
__license__ = 'GPL 3'
|
||||||
__copyright__ = '2011, John Schember <john@nachtimwald.com>'
|
__copyright__ = '2011, John Schember <john@nachtimwald.com>'
|
||||||
@ -21,4 +22,4 @@ class AmazonITKindleStore(AmazonUKKindleStore):
|
|||||||
'linkCode=ur2&camp=3370&creative=23322')
|
'linkCode=ur2&camp=3370&creative=23322')
|
||||||
search_url = 'http://www.amazon.it/s/?url=search-alias%3Ddigital-text&field-keywords='
|
search_url = 'http://www.amazon.it/s/?url=search-alias%3Ddigital-text&field-keywords='
|
||||||
|
|
||||||
author_article = 'di '
|
author_article = 'di '
|
||||||
|
@ -1,6 +1,7 @@
|
|||||||
# -*- coding: utf-8 -*-
|
# -*- coding: utf-8 -*-
|
||||||
|
|
||||||
from __future__ import (unicode_literals, division, absolute_import, print_function)
|
from __future__ import (unicode_literals, division, absolute_import, print_function)
|
||||||
|
store_version = 1 # Needed for dynamic plugin loading
|
||||||
|
|
||||||
__license__ = 'GPL 3'
|
__license__ = 'GPL 3'
|
||||||
__copyright__ = '2011, John Schember <john@nachtimwald.com>'
|
__copyright__ = '2011, John Schember <john@nachtimwald.com>'
|
||||||
@ -135,11 +136,11 @@ class AmazonKindleStore(StorePlugin):
|
|||||||
title_xpath = './/h3[@class="newaps"]/a//text()'
|
title_xpath = './/h3[@class="newaps"]/a//text()'
|
||||||
author_xpath = './/h3[@class="newaps"]//span[contains(@class, "reg")]/text()'
|
author_xpath = './/h3[@class="newaps"]//span[contains(@class, "reg")]/text()'
|
||||||
        price_xpath = './/ul[contains(@class, "rsltL")]//span[contains(@class, "lrg") and contains(@class, "bld")]/text()'
 
            for data in doc.xpath(data_xpath):
                if counter <= 0:
                    break
 
                # Even though we are searching digital-text only Amazon will still
                # put in results for non Kindle books (author pages). So we need
                # to explicitly check if the item is a Kindle book and ignore it
@@ -147,7 +148,7 @@ class AmazonKindleStore(StorePlugin):
                format = ''.join(data.xpath(format_xpath))
                if 'kindle' not in format.lower():
                    continue
 
                # We must have an asin otherwise we can't easily reference the
                # book later.
                asin_href = None
@@ -161,7 +162,7 @@ class AmazonKindleStore(StorePlugin):
                        continue
                else:
                    continue
 
                cover_url = ''.join(data.xpath(cover_xpath))
 
                title = ''.join(data.xpath(title_xpath))
@@ -172,9 +173,9 @@ class AmazonKindleStore(StorePlugin):
                    pass
 
                price = ''.join(data.xpath(price_xpath))
 
                counter -= 1
 
                s = SearchResult()
                s.cover_url = cover_url.strip()
                s.title = title.strip()
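The Amazon hunk above skips any result row whose joined format text does not mention Kindle. The pattern of joining an XPath `text()` result list and doing a case-insensitive substring check can be sketched without lxml; the sample inputs below are assumptions, not data from the store page:

```python
def is_kindle_row(format_texts):
    """Mimic ''.join(data.xpath(format_xpath)) followed by the
    case-insensitive 'kindle' check used to drop author-page rows."""
    text = ''.join(format_texts)
    return 'kindle' in text.lower()

print(is_kindle_row(['Kindle', ' Edition']))  # True
print(is_kindle_row(['Paperback']))           # False
```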
@@ -1,6 +1,7 @@
 # -*- coding: utf-8 -*-
 
 from __future__ import (unicode_literals, division, absolute_import, print_function)
+store_version = 1  # Needed for dynamic plugin loading
 
 __license__ = 'GPL 3'
 __copyright__ = '2011, John Schember <john@nachtimwald.com>'
@@ -1,6 +1,7 @@
 # -*- coding: utf-8 -*-
 
 from __future__ import (unicode_literals, division, absolute_import, print_function)
+store_version = 1  # Needed for dynamic plugin loading
 
 __license__ = 'GPL 3'
 __copyright__ = '2011, John Schember <john@nachtimwald.com>'
@@ -1,6 +1,7 @@
 # -*- coding: utf-8 -*-
 
 from __future__ import (unicode_literals, division, absolute_import, print_function)
+store_version = 1  # Needed for dynamic plugin loading
 
 __license__ = 'GPL 3'
 __copyright__ = '2011, John Schember <john@nachtimwald.com>'
@@ -22,7 +23,7 @@ from calibre.gui2.store.search_result import SearchResult
 from calibre.gui2.store.web_store_dialog import WebStoreDialog
 
 class BaenWebScriptionStore(BasicStoreConfig, StorePlugin):
 
     def open(self, parent=None, detail_item=None, external=False):
         url = 'http://www.baenebooks.com/'
 
@@ -41,26 +42,26 @@ class BaenWebScriptionStore(BasicStoreConfig, StorePlugin):
 
     def search(self, query, max_results=10, timeout=60):
         url = 'http://www.baenebooks.com/searchadv.aspx?IsSubmit=true&SearchTerm=' + urllib2.quote(query)
 
         br = browser()
 
         counter = max_results
         with closing(br.open(url, timeout=timeout)) as f:
             doc = html.fromstring(f.read())
             for data in doc.xpath('//table//table//table//table//tr'):
                 if counter <= 0:
                     break
 
                 id = ''.join(data.xpath('./td[1]/a/@href'))
                 if not id or not id.startswith('p-'):
                     continue
 
                 title = ''.join(data.xpath('./td[1]/a/text()'))
 
                 author = ''
                 cover_url = ''
                 price = ''
 
                 with closing(br.open('http://www.baenebooks.com/' + id.strip(), timeout=timeout/4)) as nf:
                     idata = html.fromstring(nf.read())
                     author = ''.join(idata.xpath('//span[@class="ProductNameText"]/../b/text()'))
@@ -68,16 +69,16 @@ class BaenWebScriptionStore(BasicStoreConfig, StorePlugin):
                     price = ''.join(idata.xpath('//span[@class="variantprice"]/text()'))
                     a, b, price = price.partition('$')
                     price = b + price
 
                     pnum = ''
                     mo = re.search(r'p-(?P<num>\d+)-', id.strip())
                     if mo:
                         pnum = mo.group('num')
                     if pnum:
                         cover_url = 'http://www.baenebooks.com/' + ''.join(idata.xpath('//img[@id="ProductPic%s"]/@src' % pnum))
 
                 counter -= 1
 
                 s = SearchResult()
                 s.cover_url = cover_url
                 s.title = title.strip()
@@ -86,5 +87,5 @@ class BaenWebScriptionStore(BasicStoreConfig, StorePlugin):
                 s.detail_item = id.strip()
                 s.drm = SearchResult.DRM_UNLOCKED
                 s.formats = 'RB, MOBI, EPUB, LIT, LRF, RTF, HTML'
 
                 yield s
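The Baen hunk normalizes the scraped price with `str.partition('$')`, keeping only the dollar sign and everything after it. A standalone sketch of that normalization, where the sample strings are assumptions rather than real store output:

```python
def normalize_price(raw):
    """Keep the '$' and everything after it, as the Baen hunk does."""
    a, b, price = raw.partition('$')  # b is '$' if found, else ''
    return b + price

print(normalize_price('Price: $6.00'))  # '$6.00'
print(normalize_price('Free'))          # '' — no dollar sign present
```

`partition` never raises on a missing separator, so a price cell without a `$` simply yields an empty string.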
@@ -1,6 +1,7 @@
 # -*- coding: utf-8 -*-
 
 from __future__ import (unicode_literals, division, absolute_import, print_function)
+store_version = 1  # Needed for dynamic plugin loading
 
 __license__ = 'GPL 3'
 __copyright__ = '2011, John Schember <john@nachtimwald.com>'
@@ -71,7 +72,7 @@ class BeWriteStore(BasicStoreConfig, StorePlugin):
 
         with closing(br.open(search_result.detail_item, timeout=timeout)) as nf:
             idata = html.fromstring(nf.read())
 
             price = ''.join(idata.xpath('//div[@id="content"]//td[contains(text(), "ePub")]/text()'))
             if not price:
                 price = ''.join(idata.xpath('//div[@id="content"]//td[contains(text(), "MOBI")]/text()'))
@@ -79,7 +80,7 @@ class BeWriteStore(BasicStoreConfig, StorePlugin):
                 price = ''.join(idata.xpath('//div[@id="content"]//td[contains(text(), "PDF")]/text()'))
             price = '$' + price.split('$')[-1]
             search_result.price = price.strip()
 
             cover_img = idata.xpath('//div[@id="content"]//img/@src')
             if cover_img:
                 for i in cover_img:
@@ -87,7 +88,7 @@ class BeWriteStore(BasicStoreConfig, StorePlugin):
                     cover_url = 'http://www.bewrite.net/mm5/' + i
                     search_result.cover_url = cover_url.strip()
                     break
 
             formats = set([])
             if idata.xpath('boolean(//div[@id="content"]//td[contains(text(), "ePub")])'):
                 formats.add('EPUB')
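The BeWrite hunk falls back through ePub, then MOBI, then PDF table cells until one yields a price, and finally keeps only the dollar amount with `split('$')[-1]`. The control flow can be sketched generically; the helper name and sample values are illustrative, not from the plugin:

```python
def first_nonempty(*candidates):
    """Return the first truthy candidate, mirroring the
    'if not price: try the next format cell' fallback chain."""
    for c in candidates:
        if c:
            return c
    return ''

raw = first_nonempty('', '', 'PDF edition $5.00')
# Keep only the dollar amount, as the hunk does with split('$')[-1].
price = '$' + raw.split('$')[-1]
print(price)  # '$5.00'
```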
@@ -1,6 +1,7 @@
 # -*- coding: utf-8 -*-
 
 from __future__ import (unicode_literals, division, absolute_import, print_function)
+store_version = 1  # Needed for dynamic plugin loading
 
 __license__ = 'GPL 3'
 __copyright__ = '2012, Alex Stanev <alex@stanev.org>'
@@ -26,7 +27,7 @@ class BiblioStore(BasicStoreConfig, OpenSearchOPDSStore):
 
         for s in OpenSearchOPDSStore.search(self, query, max_results, timeout):
             yield s
 
     def get_details(self, search_result, timeout):
         # get format and DRM status
         from calibre import browser
@@ -39,13 +40,13 @@ class BiblioStore(BasicStoreConfig, OpenSearchOPDSStore):
         search_result.formats = ''
         if idata.xpath('.//span[@class="format epub"]'):
             search_result.formats = 'EPUB'
 
         if idata.xpath('.//span[@class="format pdf"]'):
             if search_result.formats == '':
                 search_result.formats = 'PDF'
             else:
                 search_result.formats.join(', PDF')
 
         if idata.xpath('.//span[@class="format nodrm-icon"]'):
             search_result.drm = SearchResult.DRM_UNLOCKED
         else:
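Note that `search_result.formats.join(', PDF')` in the Biblio hunk above is effectively a no-op: `str.join` returns a new string built by inserting the separator between the characters of its argument, and the result is discarded here. A minimal demonstration, with plain concatenation shown as the presumably intended append:

```python
formats = 'EPUB'
joined = formats.join(', PDF')
print(joined)   # ',EPUB EPUBPEPUBDEPUBF' — 'EPUB' between each char of ', PDF'
print(formats)  # 'EPUB' — strings are immutable; nothing was appended

# The presumably intended behavior:
formats = formats + ', PDF'
print(formats)  # 'EPUB, PDF'
```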
@@ -1,6 +1,7 @@
 # -*- coding: utf-8 -*-
 
 from __future__ import (unicode_literals, division, absolute_import, print_function)
+store_version = 1  # Needed for dynamic plugin loading
 
 __license__ = 'GPL 3'
 __copyright__ = '2011, John Schember <john@nachtimwald.com>'
@@ -1,6 +1,7 @@
 # -*- coding: utf-8 -*-
 
 from __future__ import (unicode_literals, division, absolute_import, print_function)
+store_version = 1  # Needed for dynamic plugin loading
 
 __license__ = 'GPL 3'
 __copyright__ = '2011, Tomasz Długosz <tomek3d@gmail.com>'
@@ -1,6 +1,7 @@
 # -*- coding: utf-8 -*-
 
 from __future__ import (unicode_literals, division, absolute_import, print_function)
+store_version = 1  # Needed for dynamic plugin loading
 
 __license__ = 'GPL 3'
 __copyright__ = '2011, Alex Stanev <alex@stanev.org>'
@@ -1,6 +1,7 @@
 # -*- coding: utf-8 -*-
 
 from __future__ import (unicode_literals, division, absolute_import, print_function)
+store_version = 1  # Needed for dynamic plugin loading
 
 __license__ = 'GPL 3'
 __copyright__ = '2011, John Schember <john@nachtimwald.com>'
@@ -1,6 +1,7 @@
 # -*- coding: utf-8 -*-
 
 from __future__ import (unicode_literals, division, absolute_import, print_function)
+store_version = 1  # Needed for dynamic plugin loading
 
 __license__ = 'GPL 3'
 __copyright__ = '2011, John Schember <john@nachtimwald.com>'
@@ -1,6 +1,7 @@
 # -*- coding: utf-8 -*-
 
 from __future__ import (unicode_literals, division, absolute_import, print_function)
+store_version = 1  # Needed for dynamic plugin loading
 
 __license__ = 'GPL 3'
 __copyright__ = '2011-2012, Tomasz Długosz <tomek3d@gmail.com>'
@@ -1,6 +1,7 @@
 # -*- coding: utf-8 -*-
 
 from __future__ import (unicode_literals, division, absolute_import, print_function)
+store_version = 1  # Needed for dynamic plugin loading
 
 __license__ = 'GPL 3'
 __copyright__ = '2011, John Schember <john@nachtimwald.com>'
Some files were not shown because too many files have changed in this diff