0.9.32

commit 2a6da5b204
@@ -20,6 +20,56 @@
# new recipes:
#   - title:

- version: 0.9.32
  date: 2013-05-24

  new features:
    - title: "Show the number of currently selected books in the status bar at the bottom of the book list"

    - title: "Driver for PocketBook Touch 623 and Yarvik tablet Xenta 13c"
      tickets: [1182850, 1181669]

    - title: "When editing dates such as published, allow pressing the minus key to clear the date and the = key to set the date to today."
      tickets: [1181449]

  bug fixes:
    - title: "EPUB/AZW3 Output: Fix regression that caused errors when trying to convert documents that have URLs with invalid (non-UTF-8) quoting."
      tickets: [1181049]

    - title: "When backing up metadata, automatically remove invalid XML characters instead of erroring out"

    - title: "ebook-viewer: Fix --debug-javascript option causing an error when running from a binary build on OS X and Linux"

    - title: "Fix switch library dialog and menu both popping up when clicking the library button in some window managers"

    - title: "Apple driver: Fix a regression in 0.9.31 that could cause sending books to the device to hang"

    - title: "When setting metadata using the edit metadata dialog, convert newlines, tabs, etc. to normal spaces"
      tickets: [1182268]

    - title: "EPUB/AZW3 Output: Fix pages that contain only an SVG image being regarded as empty and removed during splitting"

    - title: "AZW3 Input: Handle files that use unnecessary svg: prefixes."
      tickets: [1182257]

    - title: "EPUB Input: Handle EPUB files that have no <metadata> section in their OPF."
      tickets: [1181546]

    - title: "Get Books: Fix the Foyles UK store plugin."
      tickets: [1181494]

  improved recipes:
    - Wall Street Journal
    - Various Polish news sources
    - Handelsblatt
    - The Australian
    - Las Vegas Review
    - NME

  new recipes:
    - title: WirtschaftsWoche Online
      author: Hegi


- version: 0.9.31
  date: 2013-05-17

@@ -47,8 +97,6 @@

    - title: "Search and replace wizard: Fix generated HTML being slightly different from the actual HTML in the conversion pipeline for some input formats (mainly HTML, CHM, LIT)."

    - title: "Nook Color/Touch driver: Scan for ebooks in the entire main memory, not just under My Files"

  improved recipes:
    - Weblogs SL
@@ -504,6 +504,31 @@ There is a search bar at the top of the Tag Browser that allows you to easily fi

You can control how items are sorted in the Tag Browser via the box at the bottom of the Tag Browser. You can choose to sort by name, average rating or popularity (popularity is the number of books with an item in your library; for example, the popularity of Isaac Asimov is the number of books in your library by Isaac Asimov).

Quickview
----------

Sometimes you want to select a book and quickly get a list of books with the same value in some category (authors, tags, publisher, series, etc.) as the currently selected book, but without changing the current view of the library. You can do this with Quickview. Quickview opens a second window showing the list of books matching the value of interest.

For example, assume you want to see a list of all the books with the same author as the currently selected book. Click in the author cell you are interested in and press the 'Q' key. A window will open with all the authors for that book on the left, and all the books by the selected author on the right.

Some example Quickview usages: quickly seeing which other books:

- have some tag that is applied to the currently selected book,
- are in the same series as the current book
- have the same values in a custom column as the current book
- are written by one of the same authors as the current book

without changing the contents of the library view.

The Quickview window opens on top of the |app| window and will stay open until you explicitly close it. You can use Quickview and the |app| library view at the same time. For example, if in the |app| library view you click on a category column (tags, series, publisher, authors, etc.) for a book, the Quickview window contents will change to show you in the left-hand pane the items in that category for the selected book (e.g., the tags for that book). The first item in that list will be selected, and Quickview will show you in the right-hand pane all the books in your library that reference that item. Click on a different item in the left-hand pane to see the books with that different item.

Double-click on a book in the Quickview window to select that book in the library view. This will also change the items displayed in the Quickview window (the left-hand pane) to show the items in the newly selected book.

Shift- (or Ctrl-) double-click on a book in the Quickview window to open the edit metadata dialog on that book in the |app| window.

You can see if a column can be Quickview'ed by hovering your mouse over the column heading and looking at the tooltip for that heading. You can also find out by right-clicking on the column heading to see if the "Quickview" option is shown in the menu, in which case choosing that Quickview option is equivalent to pressing 'Q' in the current cell.

Quickview respects the virtual library setting, showing only books in the current virtual library.

Jobs
-----
.. image:: images/jobs.png
@@ -57,6 +57,26 @@ library. The virtual library will then be created based on the search
you just typed in. Searches are very powerful; for examples of the kinds
of things you can do with them, see :ref:`search_interface`.

Examples of useful Virtual Libraries
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

* Books added to |app| in the last day::

      date:>1daysago

* Books added to |app| in the last month::

      date:>30daysago

* Books with a rating of 5 stars::

      rating:5

* Books with a rating of at least 4 stars::

      rating:>=4

* Books with no rating::

      rating:false

* Periodicals downloaded by the Fetch News function in |app|::

      tags:=News and author:=calibre

* Books with no tags::

      tags:false

* Books with no covers::

      cover:false
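Search expressions like these can also be combined with ``and``, ``or`` and ``not``, just as in the Fetch News example above. For instance (an illustrative combination built only from the fields already shown here):

* Books that have neither a rating nor any tags::

      rating:false and tags:false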
Working with Virtual Libraries
-------------------------------------
@ -1,47 +1,24 @@
|
||||
from calibre.web.feeds.news import BasicNewsRecipe
|
||||
import re
|
||||
class Adventure_zone(BasicNewsRecipe):
|
||||
title = u'Adventure Zone'
|
||||
__author__ = 'fenuks'
|
||||
description = u'Czytaj więcej o przygodzie - codzienne nowinki. Szukaj u nas solucji i poradników, czytaj recenzje i zapowiedzi. Także galeria, pliki oraz forum dla wszystkich fanów gier przygodowych.'
|
||||
category = 'games'
|
||||
language = 'pl'
|
||||
BASEURL = 'http://www.adventure-zone.info/fusion/'
|
||||
no_stylesheets = True
|
||||
extra_css = '.image {float: left; margin-right: 5px;}'
|
||||
oldest_article = 20
|
||||
max_articles_per_feed = 100
|
||||
cover_url = 'http://www.adventure-zone.info/inne/logoaz_2012.png'
|
||||
index = 'http://www.adventure-zone.info/fusion/'
|
||||
remove_attributes = ['style']
|
||||
use_embedded_content = False
|
||||
preprocess_regexps = [(re.compile(r"<td class='capmain'>Komentarze</td>", re.IGNORECASE), lambda m: ''),
|
||||
(re.compile(r'</?table.*?>'), lambda match: ''),
|
||||
(re.compile(r'</?tbody.*?>'), lambda match: '')]
|
||||
remove_tags_before = dict(name='td', attrs={'class':'main-bg'})
|
||||
remove_tags = [dict(name='img', attrs={'alt':'Drukuj'})]
|
||||
remove_tags_after = dict(id='comments')
|
||||
extra_css = '.main-bg{text-align: left;} td.capmain{ font-size: 22px; } img.news-category {float: left; margin-right: 5px;}'
|
||||
feeds = [(u'Nowinki', u'http://www.adventure-zone.info/fusion/feeds/news.php')]
|
||||
|
||||
'''def get_cover_url(self):
|
||||
soup = self.index_to_soup('http://www.adventure-zone.info/fusion/news.php')
|
||||
cover=soup.find(id='box_OstatninumerAZ')
|
||||
self.cover_url='http://www.adventure-zone.info/fusion/'+ cover.center.a.img['src']
|
||||
return getattr(self, 'cover_url', self.cover_url)'''
|
||||
|
||||
def populate_article_metadata(self, article, soup, first):
|
||||
result = re.search('(.+) - Adventure Zone', soup.title.string)
|
||||
if result:
|
||||
result = result.group(1)
|
||||
else:
|
||||
result = soup.body.find('strong')
|
||||
if result:
|
||||
result = result.string
|
||||
if result:
|
||||
result = result.replace('&amp;', '&')
result = result.replace('&#39;', '’')
|
||||
article.title = result
|
||||
keep_only_tags = [dict(attrs={'class':'content'})]
|
||||
remove_tags = [dict(attrs={'class':'footer'})]
|
||||
feeds = [(u'Nowinki', u'http://www.adventure-zone.info/fusion/rss/index.php')]
|
||||
|
||||
def skip_ad_pages(self, soup):
|
||||
skip_tag = soup.body.find(name='td', attrs={'class':'main-bg'})
|
||||
skip_tag = soup.body.find(attrs={'class':'content'})
|
||||
skip_tag = skip_tag.findAll(name='a')
|
||||
title = soup.title.string.lower()
|
||||
if (('zapowied' in title) or ('recenzj' in title) or ('solucj' in title) or ('poradnik' in title)):
|
||||
@ -49,20 +26,10 @@ class Adventure_zone(BasicNewsRecipe):
|
||||
if r.strong and r.strong.string:
|
||||
word=r.strong.string.lower()
|
||||
if (('zapowied' in word) or ('recenzj' in word) or ('solucj' in word) or ('poradnik' in word)):
|
||||
return self.index_to_soup('http://www.adventure-zone.info/fusion/print.php?type=A&item'+r['href'][r['href'].find('article_id')+7:], raw=True)
|
||||
return self.index_to_soup(self.BASEURL+r['href'], raw=True)
|
||||
|
||||
def preprocess_html(self, soup):
|
||||
footer=soup.find(attrs={'class':'news-footer middle-border'})
|
||||
r = soup.find(name='td', attrs={'class':'capmain'})
|
||||
if r:
|
||||
r.name='h1'
|
||||
for item in soup.findAll(name=['tr', 'td']):
|
||||
item.name='div'
|
||||
if footer and len(footer('a'))>=2:
|
||||
footer('a')[1].extract()
|
||||
for item in soup.findAll(style=True):
|
||||
del item['style']
|
||||
for a in soup('a'):
|
||||
if a.has_key('href') and 'http://' not in a['href'] and 'https://' not in a['href']:
|
||||
a['href']=self.index + a['href']
|
||||
for link in soup.findAll('a', href=True):
|
||||
if not link['href'].startswith('http'):
|
||||
link['href'] = self.BASEURL + link['href']
|
||||
return soup
|
||||
|
@ -13,6 +13,7 @@ class Astroflesz(BasicNewsRecipe):
|
||||
max_articles_per_feed = 100
|
||||
no_stylesheets = True
|
||||
use_embedded_content = False
|
||||
remove_empty_feeds = True
|
||||
remove_attributes = ['style']
|
||||
keep_only_tags = [dict(id="k2Container")]
|
||||
remove_tags_after = dict(name='div', attrs={'class':'itemLinks'})
|
||||
|
@ -6,12 +6,10 @@ __copyright__ = '2011, Piotr Kontek, piotr.kontek@gmail.com \
|
||||
2013, Tomasz Długosz, tomek3d@gmail.com'
|
||||
|
||||
from calibre.web.feeds.news import BasicNewsRecipe
|
||||
from calibre.ptempfile import PersistentTemporaryFile
|
||||
from datetime import date
|
||||
import re
|
||||
from lxml import html
|
||||
|
||||
class GN(BasicNewsRecipe):
|
||||
EDITION = 0
|
||||
|
||||
__author__ = 'Piotr Kontek, Tomasz Długosz'
|
||||
title = u'Gość Niedzielny'
|
||||
@ -20,83 +18,23 @@ class GN(BasicNewsRecipe):
|
||||
no_stylesheets = True
|
||||
language = 'pl'
|
||||
remove_javascript = True
|
||||
temp_files = []
|
||||
|
||||
articles_are_obfuscated = True
|
||||
def find_last_issue(self):
|
||||
raw = self.index_to_soup('http://gosc.pl/wyszukaj/wydania/3.Gosc-Niedzielny/', raw=True)
|
||||
doc = html.fromstring(raw)
|
||||
page = doc.xpath('//div[@class="c"]//div[@class="search-result"]/div[1]/div[2]/h1//a/@href')
|
||||
|
||||
def get_obfuscated_article(self, url):
|
||||
br = self.get_browser()
|
||||
br.open(url)
|
||||
source = br.response().read()
|
||||
page = self.index_to_soup(source)
|
||||
|
||||
main_section = page.find('div',attrs={'class':'txt doc_prnt_prv'})
|
||||
|
||||
title = main_section.find('h2')
|
||||
info = main_section.find('div', attrs={'class' : 'cf doc_info'})
|
||||
authors = info.find(attrs={'class':'l'})
|
||||
article = str(main_section.find('p', attrs={'class' : 'doc_lead'}))
|
||||
first = True
|
||||
for p in main_section.findAll('p', attrs={'class':None}, recursive=False):
|
||||
if first and p.find('img') != None:
|
||||
article += '<p>'
|
||||
article += str(p.find('img')).replace('src="/files/','src="http://www.gosc.pl/files/')
|
||||
article += '<font size="-2">'
|
||||
for s in p.findAll('span'):
|
||||
article += self.tag_to_string(s)
|
||||
article += '</font></p>'
|
||||
else:
|
||||
article += str(p).replace('src="/files/','src="http://www.gosc.pl/files/')
|
||||
first = False
|
||||
limiter = main_section.find('p', attrs={'class' : 'limiter'})
|
||||
if limiter:
|
||||
article += str(limiter)
|
||||
|
||||
html = unicode(title)
|
||||
#sometimes authors are not filled in:
|
||||
if authors:
|
||||
html += unicode(authors) + unicode(article)
|
||||
else:
|
||||
html += unicode(article)
|
||||
|
||||
self.temp_files.append(PersistentTemporaryFile('_temparse.html'))
|
||||
self.temp_files[-1].write(html)
|
||||
self.temp_files[-1].close()
|
||||
return self.temp_files[-1].name
|
||||
|
||||
def find_last_issue(self, year):
|
||||
soup = self.index_to_soup('http://gosc.pl/wyszukaj/wydania/3.Gosc-Niedzielny/rok/' + str(year))
|
||||
|
||||
#szukam zdjęcia i linka do poprzedniego pełnego numeru
|
||||
first = True
|
||||
for d in soup.findAll('div', attrs={'class':'l release_preview_l'}):
|
||||
img = d.find('img')
|
||||
if img != None:
|
||||
a = img.parent
|
||||
self.EDITION = a['href']
|
||||
#this was preventing kindles from moving old issues to 'Back Issues' category:
|
||||
#self.title = img['alt']
|
||||
self.cover_url = 'http://www.gosc.pl' + img['src']
|
||||
if year != date.today().year or not first:
|
||||
break
|
||||
first = False
|
||||
return page[1]
|
||||
|
||||
def parse_index(self):
|
||||
year = date.today().year
|
||||
self.find_last_issue(year)
|
||||
##jeśli to pierwszy numer w roku trzeba pobrać poprzedni rok
|
||||
if self.EDITION == 0:
|
||||
self.find_last_issue(year-1)
|
||||
soup = self.index_to_soup('http://www.gosc.pl' + self.EDITION)
|
||||
soup = self.index_to_soup('http://gosc.pl' + self.find_last_issue())
|
||||
feeds = []
|
||||
#wstepniak
|
||||
a = soup.find('div',attrs={'class':'release-wp-b'}).find('a')
|
||||
articles = [
|
||||
{'title' : self.tag_to_string(a),
|
||||
'url' : 'http://www.gosc.pl' + a['href'].replace('/doc/','/doc_pr/'),
|
||||
'date' : '',
|
||||
'description' : ''}
|
||||
]
|
||||
'url' : 'http://www.gosc.pl' + a['href'].replace('/doc/','/doc_pr/')
|
||||
}]
|
||||
feeds.append((u'Wstępniak',articles))
|
||||
#kategorie
|
||||
for addr in soup.findAll('a',attrs={'href':re.compile('kategoria')}):
|
||||
@ -113,16 +51,46 @@ class GN(BasicNewsRecipe):
|
||||
art = a.find('a')
|
||||
yield {
|
||||
'title' : self.tag_to_string(art),
|
||||
'url' : 'http://www.gosc.pl' + art['href'].replace('/doc/','/doc_pr/'),
|
||||
'date' : '',
|
||||
'description' : ''
|
||||
'url' : 'http://www.gosc.pl' + art['href']
|
||||
}
|
||||
for a in main_block.findAll('div', attrs={'class':'sr-document'}):
|
||||
art = a.find('a')
|
||||
yield {
|
||||
'title' : self.tag_to_string(art),
|
||||
'url' : 'http://www.gosc.pl' + art['href'].replace('/doc/','/doc_pr/'),
|
||||
'date' : '',
|
||||
'description' : ''
|
||||
'url' : 'http://www.gosc.pl' + art['href']
|
||||
}
|
||||
|
||||
def append_page(self, soup, appendtag):
|
||||
chpage= appendtag.find(attrs={'class':'pgr_nrs'})
|
||||
if chpage:
|
||||
for page in chpage.findAll('a'):
|
||||
soup2 = self.index_to_soup('http://gosc.pl' + page['href'])
|
||||
pagetext = soup2.find(attrs={'class':'intextAd'})
|
||||
pos = len(appendtag.contents)
|
||||
appendtag.insert(pos, pagetext)
|
||||
|
||||
def preprocess_html(self, soup):
|
||||
self.append_page(soup, soup.body)
|
||||
'''
|
||||
for image_div in soup.findAll(attrs={'class':'doc_image'}):
|
||||
link =
|
||||
if 'm.jpg' in image['src']:
|
||||
image['src'] = image['src'].replace('m.jpg', '.jpg')
|
||||
'''
|
||||
return soup
|
||||
|
||||
keep_only_tags = [
|
||||
dict(name='div', attrs={'class':'cf txt'})
|
||||
]
|
||||
|
||||
remove_tags = [
|
||||
dict(name='p', attrs={'class':['r tr', 'l l-2', 'wykop']}),
|
||||
dict(name='div', attrs={'class':['doc_actions', 'pgr', 'fr1_cl']}),
|
||||
dict(name='div', attrs={'id':'vote'})
|
||||
]
|
||||
|
||||
extra_css = '''
|
||||
h1 {font-size:150%}
|
||||
div#doc_image {font-style:italic; font-size:70%}
|
||||
p.limiter {font-size:150%; font-weight: bold}
|
||||
'''
|
||||
|
@ -1,16 +1,61 @@
|
||||
import re
|
||||
from calibre.web.feeds.news import BasicNewsRecipe
|
||||
|
||||
class Handelsblatt(BasicNewsRecipe):
|
||||
title = u'Handelsblatt'
|
||||
__author__ = 'malfi'
|
||||
oldest_article = 7
|
||||
__author__ = 'malfi' # modified by Hegi, last change 2013-05-20
|
||||
description = u'Handelsblatt - basierend auf den RSS-Feeds von Handelsblatt.de'
|
||||
tags = 'Nachrichten, Blog, Wirtschaft'
|
||||
publisher = 'Verlagsgruppe Handelsblatt GmbH'
|
||||
category = 'business, economy, news, Germany'
|
||||
publication_type = 'daily newspaper'
|
||||
language = 'de_DE'
|
||||
oldest_article = 7
|
||||
max_articles_per_feed = 100
|
||||
no_stylesheets = True
|
||||
# cover_url = 'http://www.handelsblatt.com/images/logo/logo_handelsblatt.com.png'
|
||||
language = 'de'
|
||||
simultaneous_downloads= 20
|
||||
|
||||
remove_tags_before = dict(attrs={'class':'hcf-overline'})
|
||||
remove_tags_after = dict(attrs={'class':'hcf-footer'})
|
||||
auto_cleanup = False
|
||||
no_stylesheets = True
|
||||
remove_javascript = True
|
||||
remove_empty_feeds = True
|
||||
|
||||
# don't duplicate articles from "Schlagzeilen" / "Exklusiv" to other rubrics
|
||||
ignore_duplicate_articles = {'title', 'url'}
|
||||
|
||||
# if you want to reduce size for a b/w or E-ink device, uncomment this:
|
||||
# compress_news_images = True
|
||||
# compress_news_images_auto_size = 16
|
||||
# scale_news_images = (400,300)
|
||||
|
||||
timefmt = ' [%a, %d %b %Y]'
|
||||
|
||||
conversion_options = {'smarten_punctuation' : True,
|
||||
'authors' : publisher,
|
||||
'publisher' : publisher}
|
||||
language = 'de_DE'
|
||||
encoding = 'UTF-8'
|
||||
|
||||
cover_source = 'http://www.handelsblatt-shop.com/epaper/482/'
|
||||
# masthead_url = 'http://www.handelsblatt.com/images/hb_logo/6543086/1-format3.jpg'
|
||||
masthead_url = 'http://www.handelsblatt-chemie.de/wp-content/uploads/2012/01/hb-logo.gif'
|
||||
|
||||
def get_cover_url(self):
|
||||
cover_source_soup = self.index_to_soup(self.cover_source)
|
||||
preview_image_div = cover_source_soup.find(attrs={'class':'vorschau'})
|
||||
return 'http://www.handelsblatt-shop.com'+preview_image_div.a.img['src']
|
||||
|
||||
# remove_tags_before = dict(attrs={'class':'hcf-overline'})
|
||||
# remove_tags_after = dict(attrs={'class':'hcf-footer'})
|
||||
# Alternatively use this:
|
||||
|
||||
keep_only_tags = [
|
||||
dict(name='div', attrs={'class':['hcf-column hcf-column1 hcf-teasercontainer hcf-maincol']}),
|
||||
dict(name='div', attrs={'id':['contentMain']})
|
||||
]
|
||||
|
||||
remove_tags = [
|
||||
dict(name='div', attrs={'class':['hcf-link-block hcf-faq-open', 'hcf-article-related']})
|
||||
]
|
||||
|
||||
feeds = [
|
||||
(u'Handelsblatt Exklusiv',u'http://www.handelsblatt.com/rss/exklusiv'),
|
||||
@ -25,15 +70,19 @@ class Handelsblatt(BasicNewsRecipe):
|
||||
(u'Handelsblatt Weblogs',u'http://www.handelsblatt.com/rss/blogs')
|
||||
]
|
||||
|
||||
extra_css = '''
|
||||
h1{font-family:Arial,Helvetica,sans-serif; font-weight:bold;font-size:large;}
|
||||
h2{font-family:Arial,Helvetica,sans-serif; font-weight:normal;font-size:small;}
|
||||
p{font-family:Arial,Helvetica,sans-serif;font-size:small;}
|
||||
body{font-family:Helvetica,Arial,sans-serif;font-size:small;}
|
||||
'''
|
||||
# Insert ". " after "Place" in <span class="hcf-location-mark">Place</span>
|
||||
# If you use .epub format you could also do this as extra_css '.hcf-location-mark:after {content: ". "}'
|
||||
preprocess_regexps = [(re.compile(r'(<span class="hcf-location-mark">[^<]*)(</span>)',
|
||||
re.DOTALL|re.IGNORECASE), lambda match: match.group(1) + '. ' + match.group(2))]
|
||||
|
||||
extra_css = 'h1 {font-size: 1.6em; text-align: left} \
|
||||
h2 {font-size: 1em; font-style: italic; font-weight: normal} \
|
||||
h3 {font-size: 1.3em;text-align: left} \
|
||||
h4, h5, h6, a {font-size: 1em;text-align: left} \
|
||||
.hcf-caption {font-size: 1em;text-align: left; font-style: italic} \
|
||||
.hcf-location-mark {font-style: italic}'
|
||||
|
||||
def print_version(self, url):
|
||||
url = url.split('/')
|
||||
url[-1] = 'v_detail_tab_print,'+url[-1]
|
||||
url = '/'.join(url)
|
||||
return url
|
||||
main, sep, id = url.rpartition('/')
|
||||
return main + '/v_detail_tab_print/' + id
|
||||
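# Illustrative sketch, not part of the recipe (the article URL below is made up):
# the new print_version() above splits off the last path segment with rpartition()
# and re-inserts it behind the print-view marker.
url = 'http://www.handelsblatt.com/unternehmen/some-article/8234156.html'  # hypothetical
main, sep, id = url.rpartition('/')
print(main + '/v_detail_tab_print/' + id)
# -> http://www.handelsblatt.com/unternehmen/some-article/v_detail_tab_print/8234156.html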
|
||||
|
@ -13,11 +13,12 @@ class Histmag(BasicNewsRecipe):
|
||||
__author__ = 'matek09'
|
||||
description = u"Artykuly historyczne i publicystyczne"
|
||||
encoding = 'utf-8'
|
||||
extra_css = '''.center img {display: block;}'''
|
||||
#preprocess_regexps = [(re.compile(r'</span>'), lambda match: '</span><br><br>'),(re.compile(r'<span>'), lambda match: '<br><br><span>')]
|
||||
no_stylesheets = True
|
||||
language = 'pl'
|
||||
remove_javascript = True
|
||||
keep_only_tags=[dict(id='article')]
|
||||
remove_tags=[dict(name = 'p', attrs = {'class' : 'article-tags'})]
|
||||
remove_tags=[dict(name = 'p', attrs = {'class' : 'article-tags'}), dict(attrs={'class':'twitter-share-button'})]
|
||||
|
||||
feeds = [(u'Wszystkie', u'http://histmag.org/rss/wszystkie.xml'), (u'Wydarzenia', u'http://histmag.org/rss/wydarzenia.xml'), (u'Recenzje', u'http://histmag.org/rss/recenzje.xml'), (u'Artykuły historyczne', u'http://histmag.org/rss/historia.xml'), (u'Publicystyka', u'http://histmag.org/rss/publicystyka.xml')]
|
||||
|
BIN  recipes/icons/geopolityka.png    (new file, 1.5 KiB, binary file not shown)
BIN  recipes/icons/gs24_pl.png        (new file, 428 B, binary file not shown)
BIN  recipes/icons/homopedia_pl.png   (new file, 541 B, binary file not shown)
BIN  recipes/icons/pc_lab.png         (new file, 697 B, binary file not shown)
BIN  recipes/icons/polityka.png       (new file, 346 B, binary file not shown)
BIN  recipes/icons/rynek_zdrowia.png  (new file, 418 B, binary file not shown)
@ -9,21 +9,24 @@ class AdvancedUserRecipe1274742400(BasicNewsRecipe):
|
||||
oldest_article = 7
|
||||
|
||||
max_articles_per_feed = 100
|
||||
keep_only_tags = [dict(id='content-main')]
|
||||
remove_tags = [dict(id=['right-col-content', 'trending-topics']),
|
||||
{'class':['ppy-outer']}
|
||||
]
|
||||
#keep_only_tags = [dict(id='content-main')]
|
||||
#remove_tags = [dict(id=['right-col-content', 'trending-topics']),
|
||||
#{'class':['ppy-outer']}
|
||||
#]
|
||||
no_stylesheets = True
|
||||
use_embedded_content = False
|
||||
auto_cleanup = True
|
||||
|
||||
|
||||
feeds = [
|
||||
(u'News', u'http://www.lvrj.com/news.rss'),
|
||||
(u'News', u'http://www.lvrj.com/news.rss'),
|
||||
(u'Business', u'http://www.lvrj.com/business.rss'),
|
||||
(u'Living', u'http://www.lvrj.com/living.rss'),
|
||||
(u'Opinion', u'http://www.lvrj.com/opinion.rss'),
|
||||
(u'Neon', u'http://www.lvrj.com/neon.rss'),
|
||||
(u'Image', u'http://www.lvrj.com/image.rss'),
|
||||
(u'Home & Garden', u'http://www.lvrj.com/home_and_garden.rss'),
|
||||
(u'Furniture & Design', u'http://www.lvrj.com/furniture_and_design.rss'),
|
||||
(u'Drive', u'http://www.lvrj.com/drive.rss'),
|
||||
(u'Real Estate', u'http://www.lvrj.com/real_estate.rss'),
|
||||
#(u'Image', u'http://www.lvrj.com/image.rss'),
|
||||
#(u'Home & Garden', u'http://www.lvrj.com/home_and_garden.rss'),
|
||||
#(u'Furniture & Design', u'http://www.lvrj.com/furniture_and_design.rss'),
|
||||
#(u'Drive', u'http://www.lvrj.com/drive.rss'),
|
||||
#(u'Real Estate', u'http://www.lvrj.com/real_estate.rss'),
|
||||
(u'Sports', u'http://www.lvrj.com/sports.rss')]
|
||||
|
@ -4,7 +4,7 @@ class AdvancedUserRecipe1306061239(BasicNewsRecipe):
|
||||
title = u'New Musical Express Magazine'
|
||||
description = 'Author D.Asbury. UK Rock & Pop Mag. '
|
||||
__author__ = 'Dave Asbury'
|
||||
# last updated 7/10/12
|
||||
# last updated 17/5/13 News feed url altered
|
||||
remove_empty_feeds = True
|
||||
remove_javascript = True
|
||||
no_stylesheets = True
|
||||
@ -13,62 +13,57 @@ class AdvancedUserRecipe1306061239(BasicNewsRecipe):
|
||||
#auto_cleanup = True
|
||||
language = 'en_GB'
|
||||
compress_news_images = True
|
||||
|
||||
def get_cover_url(self):
|
||||
soup = self.index_to_soup('http://www.nme.com/component/subscribe')
|
||||
cov = soup.find(attrs={'id' : 'magazine_cover'})
|
||||
|
||||
cov2 = str(cov['src'])
|
||||
# print '**** Cov url =*', cover_url,'***'
|
||||
#print '**** Cov url =*','http://www.magazinesdirect.com/article_images/articledir_3138/1569221/1_largelisting.jpg','***'
|
||||
|
||||
|
||||
br = browser()
|
||||
br.set_handle_redirect(False)
|
||||
try:
|
||||
br.open_novisit(cov2)
|
||||
cover_url = str(cov2)
|
||||
except:
|
||||
cover_url = 'http://tawanda3000.files.wordpress.com/2011/02/nme-logo.jpg'
|
||||
cover_url = 'http://tawanda3000.files.wordpress.com/2011/02/nme-logo.jpg'
|
||||
return cover_url
|
||||
|
||||
masthead_url = 'http://tawanda3000.files.wordpress.com/2011/02/nme-logo.jpg'
|
||||
|
||||
remove_tags = [
|
||||
dict( attrs={'class':'clear_icons'}),
|
||||
dict( attrs={'class':'share_links'}),
|
||||
dict( attrs={'id':'right_panel'}),
|
||||
dict( attrs={'class':'today box'}),
|
||||
dict(attrs={'class':'clear_icons'}),
|
||||
dict(attrs={'class':'share_links'}),
|
||||
dict(attrs={'id':'right_panel'}),
|
||||
dict(attrs={'class':'today box'}),
|
||||
|
||||
|
||||
]
|
||||
|
||||
keep_only_tags = [
|
||||
|
||||
dict(name='h1'),
|
||||
#dict(name='h3'),
|
||||
dict(attrs={'class' : 'BText'}),
|
||||
dict(attrs={'class' : 'Bmore'}),
|
||||
dict(attrs={'class' : 'bPosts'}),
|
||||
dict(attrs={'class' : 'text'}),
|
||||
dict(attrs={'id' : 'article_gallery'}),
|
||||
#dict(attrs={'class' : 'image'}),
|
||||
dict(attrs={'class' : 'article_text'})
|
||||
|
||||
]
|
||||
|
||||
|
||||
|
||||
|
||||
feeds = [
|
||||
(u'NME News', u'http://feeds.feedburner.com/nmecom/rss/newsxml?format=xml'),
|
||||
#(u'Reviews', u'http://feeds2.feedburner.com/nme/SdML'),
|
||||
(u'Reviews',u'http://feed43.com/1817687144061333.xml'),
|
||||
(u'Bloggs',u'http://feed43.com/3326754333186048.xml'),
|
||||
dict(name='h1'),
|
||||
#dict(name='h3'),
|
||||
dict(attrs={'class' : 'BText'}),
|
||||
dict(attrs={'class' : 'Bmore'}),
|
||||
dict(attrs={'class' : 'bPosts'}),
|
||||
dict(attrs={'class' : 'text'}),
|
||||
dict(attrs={'id' : 'article_gallery'}),
|
||||
#dict(attrs={'class' : 'image'}),
|
||||
dict(attrs={'class' : 'article_text'})
|
||||
|
||||
]
|
||||
|
||||
feeds = [
|
||||
(u'NME News', u'http://www.nme.com/news?alt=rss' ), #http://feeds.feedburner.com/nmecom/rss/newsxml?format=xml'),
|
||||
#(u'Reviews', u'http://feeds2.feedburner.com/nme/SdML'),
|
||||
(u'Reviews',u'http://feed43.com/1817687144061333.xml'),
|
||||
(u'Bloggs',u'http://feed43.com/3326754333186048.xml'),
|
||||
|
||||
]
|
||||
extra_css = '''
|
||||
h1{font-family:Arial,Helvetica,sans-serif; font-weight:bold;font-size:large;}
|
||||
h2{font-family:Arial,Helvetica,sans-serif; font-weight:normal;font-size:small;}
|
||||
p{font-family:Arial,Helvetica,sans-serif;font-size:small;}
|
||||
body{font-family:Helvetica,Arial,sans-serif;font-size:small;}
|
||||
'''
|
||||
'''
|
||||
|
@ -20,7 +20,7 @@ class OSNewsRecipe(BasicNewsRecipe):
|
||||
remove_javascript = True
|
||||
encoding = 'utf-8'
|
||||
use_embedded_content = False;
|
||||
|
||||
remove_empty_feeds = True
|
||||
oldest_article = 7
|
||||
max_articles_per_feed = 100
|
||||
cover_url='http://osnews.pl/wp-content/themes/osnews/img/logo.png'
|
||||
@ -31,22 +31,18 @@ class OSNewsRecipe(BasicNewsRecipe):
|
||||
'''
|
||||
|
||||
feeds = [
|
||||
(u'OSNews.pl', u'http://feeds.feedburner.com/OSnewspl')
|
||||
(u'Niusy', u'http://feeds.feedburner.com/OSnewspl'),
|
||||
(u'Wylęgarnia', u'http://feeds.feedburner.com/osnewspl_nowe')
|
||||
]
|
||||
|
||||
keep_only_tags = [
|
||||
dict(name = 'a', attrs = {'class' : 'news-heading'}),
|
||||
dict(name = 'div', attrs = {'class' : 'newsinformations'}),
|
||||
dict(name = 'div', attrs = {'id' : 'news-content'})
|
||||
dict(name = 'div', attrs = {'id' : 'content'})
|
||||
]
|
||||
|
||||
remove_tags = [
|
||||
dict(name = 'div', attrs = {'class' : 'sociable'}),
|
||||
dict(name = 'div', attrs = {'class' : 'post_prev'}),
|
||||
dict(name = 'div', attrs = {'class' : 'post_next'}),
|
||||
dict(name = 'div', attrs = {'class' : 'clr'}),
|
||||
dict(name = 'div', attrs = {'class' : 'tw_button'}),
|
||||
dict(name = 'div', attrs = {'style' : 'width:56px;height:60px;float:left;margin-right:10px'})
|
||||
dict(name = 'div', attrs = {'class' : ['newstags', 'tw_button', 'post_prev']}),
|
||||
dict(name = 'div', attrs = {'id' : 'newspage_upinfo'}),
|
||||
]
|
||||
|
||||
preprocess_regexps = [(re.compile(u'</span>Komentarze: \(?[0-9]+\)? ?<span'), lambda match: '</span><span')]
|
||||
remove_tags_after = dict(name = 'div', attrs = {'class' : 'post_prev'})
|
||||
preprocess_regexps = [(re.compile(u'</span>Komentarze: \(?[0-9]+\)? ?<span'), lambda match: '</span><span'), (re.compile(u'<iframe.+?</iframe>'), lambda match: '')]
|
||||
|
@ -26,14 +26,14 @@ class DailyTelegraph(BasicNewsRecipe):
|
||||
|
||||
keep_only_tags = [dict(name='div', attrs={'id': 'story'})]
|
||||
|
||||
#remove_tags = [dict(name=['object','link'])]
|
||||
remove_tags = [dict(name ='div', attrs = {'class': 'story-info'}),
|
||||
dict(name ='div', attrs = {'class': 'story-header-tools'}),
|
||||
dict(name ='div', attrs = {'class': 'story-sidebar'}),
|
||||
dict(name ='div', attrs = {'class': 'story-footer'}),
|
||||
dict(name ='div', attrs = {'id': 'comments'}),
|
||||
dict(name ='div', attrs = {'class': 'story-extras story-extras-2'}),
|
||||
dict(name ='div', attrs = {'class': 'group item-count-1 story-related'})
|
||||
# remove_tags = [dict(name=['object','link'])]
|
||||
remove_tags = [dict(name='div', attrs={'class': 'story-info'}),
|
||||
dict(name='div', attrs={'class': 'story-header-tools'}),
|
||||
dict(name='div', attrs={'class': 'story-sidebar'}),
|
||||
dict(name='div', attrs={'class': 'story-footer'}),
|
||||
dict(name='div', attrs={'id': 'comments'}),
|
||||
dict(name='div', attrs={'class': 'story-extras story-extras-2'}),
|
||||
dict(name='div', attrs={'class': 'group item-count-1 story-related'})
|
||||
]
|
||||
|
||||
extra_css = '''
|
||||
@ -45,30 +45,31 @@ class DailyTelegraph(BasicNewsRecipe):
|
||||
.caption{font-family:Trebuchet MS,Trebuchet,Helvetica,sans-serif; font-size: xx-small;}
|
||||
'''
|
||||
|
||||
feeds = [ (u'News', u'http://feeds.news.com.au/public/rss/2.0/aus_news_807.xml'),
|
||||
(u'Opinion', u'http://feeds.news.com.au/public/rss/2.0/aus_opinion_58.xml'),
|
||||
(u'The Nation', u'http://feeds.news.com.au/public/rss/2.0/aus_the_nation_62.xml'),
|
||||
(u'World News', u'http://feeds.news.com.au/public/rss/2.0/aus_world_808.xml'),
|
||||
(u'US Election', u'http://feeds.news.com.au/public/rss/2.0/aus_uselection_687.xml'),
|
||||
(u'Climate', u'http://feeds.news.com.au/public/rss/2.0/aus_climate_809.xml'),
|
||||
(u'Media', u'http://feeds.news.com.au/public/rss/2.0/aus_media_57.xml'),
|
||||
(u'IT', u'http://feeds.news.com.au/public/rss/2.0/ausit_itnews_topstories_367.xml'),
|
||||
(u'Exec Tech', u'http://feeds.news.com.au/public/rss/2.0/ausit_exec_topstories_385.xml'),
|
||||
(u'Higher Education', u'http://feeds.news.com.au/public/rss/2.0/aus_higher_education_56.xml'),
|
||||
(u'Arts', u'http://feeds.news.com.au/public/rss/2.0/aus_arts_51.xml'),
|
||||
(u'Travel', u'http://feeds.news.com.au/public/rss/2.0/aus_travel_and_indulgence_63.xml'),
|
||||
(u'Property', u'http://feeds.news.com.au/public/rss/2.0/aus_property_59.xml'),
|
||||
(u'Sport', u'http://feeds.news.com.au/public/rss/2.0/aus_sport_61.xml'),
|
||||
(u'Business', u'http://feeds.news.com.au/public/rss/2.0/aus_business_811.xml'),
|
||||
(u'Aviation', u'http://feeds.news.com.au/public/rss/2.0/aus_business_aviation_706.xml'),
|
||||
(u'Commercial Property', u'http://feeds.news.com.au/public/rss/2.0/aus_business_commercial_property_708.xml'),
|
||||
(u'Mining', u'http://feeds.news.com.au/public/rss/2.0/aus_business_mining_704.xml')]
|
||||
feeds = [
|
||||
(u'News', u'http://feeds.news.com.au/public/rss/2.0/aus_news_807.xml'),
|
||||
(u'Opinion', u'http://feeds.news.com.au/public/rss/2.0/aus_opinion_58.xml'),
|
||||
(u'The Nation', u'http://feeds.news.com.au/public/rss/2.0/aus_the_nation_62.xml'),
|
||||
(u'World News', u'http://feeds.news.com.au/public/rss/2.0/aus_world_808.xml'),
|
||||
(u'US Election', u'http://feeds.news.com.au/public/rss/2.0/aus_uselection_687.xml'),
|
||||
(u'Climate', u'http://feeds.news.com.au/public/rss/2.0/aus_climate_809.xml'),
|
||||
(u'Media', u'http://feeds.news.com.au/public/rss/2.0/aus_media_57.xml'),
|
||||
(u'IT', u'http://feeds.news.com.au/public/rss/2.0/ausit_itnews_topstories_367.xml'),
|
||||
(u'Exec Tech', u'http://feeds.news.com.au/public/rss/2.0/ausit_exec_topstories_385.xml'),
|
||||
(u'Higher Education', u'http://feeds.news.com.au/public/rss/2.0/aus_higher_education_56.xml'),
|
||||
(u'Arts', u'http://feeds.news.com.au/public/rss/2.0/aus_arts_51.xml'),
|
||||
(u'Travel', u'http://feeds.news.com.au/public/rss/2.0/aus_travel_and_indulgence_63.xml'),
|
||||
(u'Property', u'http://feeds.news.com.au/public/rss/2.0/aus_property_59.xml'),
|
||||
(u'Sport', u'http://feeds.news.com.au/public/rss/2.0/aus_sport_61.xml'),
|
||||
(u'Business', u'http://feeds.news.com.au/public/rss/2.0/aus_business_811.xml'),
|
||||
(u'Aviation', u'http://feeds.news.com.au/public/rss/2.0/aus_business_aviation_706.xml'),
|
||||
(u'Commercial Property', u'http://feeds.news.com.au/public/rss/2.0/aus_business_commercial_property_708.xml'),
|
||||
(u'Mining', u'http://feeds.news.com.au/public/rss/2.0/aus_business_mining_704.xml')]
|
||||
|
||||
def get_browser(self):
|
||||
br = BasicNewsRecipe.get_browser(self)
|
||||
if self.username and self.password:
|
||||
br.open('http://www.theaustralian.com.au')
|
||||
br.select_form(nr=0)
|
||||
br.select_form(nr=1)
|
||||
br['username'] = self.username
|
||||
br['password'] = self.password
|
||||
raw = br.submit().read()
|
||||
@ -80,10 +81,11 @@ class DailyTelegraph(BasicNewsRecipe):
|
||||
def get_article_url(self, article):
|
||||
return article.id
|
||||
|
||||
#br = self.get_browser()
|
||||
#br.open(article.link).read()
|
||||
#print br.geturl()
|
||||
# br = self.get_browser()
|
||||
# br.open(article.link).read()
|
||||
# print br.geturl()
|
||||
|
||||
# return br.geturl()
|
||||
|
||||
#return br.geturl()
|
||||
|
||||
|
||||
|
recipes/wirtscafts_woche.recipe  (new file, 86 lines)
@@ -0,0 +1,86 @@
|
||||
__license__ = 'GPL v3'
|
||||
__copyright__ = '2013, Armin Geller'
|
||||
|
||||
'''
|
||||
Fetch WirtschaftsWoche Online
|
||||
'''
|
||||
import re
|
||||
# import time
|
||||
from calibre.web.feeds.news import BasicNewsRecipe
|
||||
class WirtschaftsWocheOnline(BasicNewsRecipe):
|
||||
title = u'WirtschaftsWoche Online'
|
||||
__author__ = 'Hegi' # Update AGE 2013-01-05; Modified by Hegi 2013-04-28
|
||||
description = u'Wirtschaftswoche Online - basierend auf den RSS-Feeds von Wiwo.de'
|
||||
tags = 'Nachrichten, Blog, Wirtschaft'
|
||||
publisher = 'Verlagsgruppe Handelsblatt GmbH / Redaktion WirtschaftsWoche Online'
|
||||
category = 'business, economy, news, Germany'
|
||||
publication_type = 'weekly magazine'
|
||||
language = 'de'
|
||||
oldest_article = 7
|
||||
max_articles_per_feed = 100
|
||||
simultaneous_downloads= 20
|
||||
|
||||
auto_cleanup = False
|
||||
no_stylesheets = True
|
||||
remove_javascript = True
|
||||
remove_empty_feeds = True
|
||||
|
||||
# don't duplicate articles from "Schlagzeilen" / "Exklusiv" to other rubrics
|
||||
ignore_duplicate_articles = {'title', 'url'}
|
||||
|
||||
# if you want to reduce size for a b/w or E-ink device, uncomment this:
|
||||
# compress_news_images = True
|
||||
# compress_news_images_auto_size = 16
|
||||
# scale_news_images = (400,300)
|
||||
|
||||
timefmt = ' [%a, %d %b %Y]'
|
||||
|
||||
conversion_options = {'smarten_punctuation' : True,
|
||||
'authors' : publisher,
|
||||
'publisher' : publisher}
|
||||
language = 'de_DE'
|
||||
encoding = 'UTF-8'
|
||||
cover_source = 'http://www.wiwo-shop.de/wirtschaftswoche/wirtschaftswoche-emagazin-p1952.html'
|
||||
masthead_url = 'http://www.wiwo.de/images/wiwo_logo/5748610/1-formatOriginal.png'
|
||||
|
||||
def get_cover_url(self):
|
||||
cover_source_soup = self.index_to_soup(self.cover_source)
|
||||
preview_image_div = cover_source_soup.find(attrs={'class':'container vorschau'})
|
||||
return 'http://www.wiwo-shop.de'+preview_image_div.a.img['src']
|
||||
|
||||
# Insert ". " after "Place" in <span class="hcf-location-mark">Place</span>
|
||||
# If you use .epub format you could also do this as extra_css '.hcf-location-mark:after {content: ". "}'
|
||||
preprocess_regexps = [(re.compile(r'(<span class="hcf-location-mark">[^<]*)(</span>)',
|
||||
re.DOTALL|re.IGNORECASE), lambda match: match.group(1) + '. ' + match.group(2))]
|
||||
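# Illustrative sketch, not part of the recipe: the preprocess_regexps rule above
# applied to a made-up snippet. It appends ". " to the location mark so the
# dateline reads naturally in the converted article.
import re
pattern = re.compile(r'(<span class="hcf-location-mark">[^<]*)(</span>)', re.DOTALL | re.IGNORECASE)
sample = '<span class="hcf-location-mark">BERLIN</span>Die Bundesregierung plant ...'
print(pattern.sub(lambda m: m.group(1) + '. ' + m.group(2), sample))
# -> <span class="hcf-location-mark">BERLIN. </span>Die Bundesregierung plant ...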
|
||||
extra_css = 'h1 {font-size: 1.6em; text-align: left} \
|
||||
h2 {font-size: 1em; font-style: italic; font-weight: normal} \
|
||||
h3 {font-size: 1.3em;text-align: left} \
|
||||
h4, h5, h6, a {font-size: 1em;text-align: left} \
|
||||
.hcf-caption {font-size: 1em;text-align: left; font-style: italic} \
|
||||
.hcf-location-mark {font-style: italic}'
|
||||
|
||||
keep_only_tags = [
|
||||
dict(name='div', attrs={'class':['hcf-column hcf-column1 hcf-teasercontainer hcf-maincol']}),
|
||||
dict(name='div', attrs={'id':['contentMain']})
|
||||
]
|
||||
|
||||
remove_tags = [
|
||||
dict(name='div', attrs={'class':['hcf-link-block hcf-faq-open', 'hcf-article-related']})
|
||||
]
|
||||
|
||||
feeds = [
|
||||
(u'Schlagzeilen', u'http://www.wiwo.de/contentexport/feed/rss/schlagzeilen'),
|
||||
(u'Exklusiv', u'http://www.wiwo.de/contentexport/feed/rss/exklusiv'),
|
||||
# (u'Themen', u'http://www.wiwo.de/contentexport/feed/rss/themen'), # AGE no print version
|
||||
(u'Unternehmen', u'http://www.wiwo.de/contentexport/feed/rss/unternehmen'),
|
||||
(u'Finanzen', u'http://www.wiwo.de/contentexport/feed/rss/finanzen'),
|
||||
(u'Politik', u'http://www.wiwo.de/contentexport/feed/rss/politik'),
|
||||
(u'Erfolg', u'http://www.wiwo.de/contentexport/feed/rss/erfolg'),
|
||||
(u'Technologie', u'http://www.wiwo.de/contentexport/feed/rss/technologie'),
|
||||
# (u'Green-WiWo', u'http://green.wiwo.de/feed/rss/') # AGE no print version
|
||||
]
|
||||
def print_version(self, url):
|
||||
main, sep, id = url.rpartition('/')
|
||||
return main + '/v_detail_tab_print/' + id
|
||||
|
@ -9,8 +9,9 @@ import copy
|
||||
# http://online.wsj.com/page/us_in_todays_paper.html
|
||||
|
||||
def filter_classes(x):
|
||||
if not x: return False
|
||||
bad_classes = {'sTools', 'printSummary', 'mostPopular', 'relatedCollection'}
|
||||
if not x:
|
||||
return False
|
||||
bad_classes = {'articleInsetPoll', 'trendingNow', 'sTools', 'printSummary', 'mostPopular', 'relatedCollection'}
|
||||
classes = frozenset(x.split())
|
||||
return len(bad_classes.intersection(classes)) > 0
|
||||
|
||||
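# Quick illustrative check of the updated filter_classes() above (the class
# strings are made up, not taken from a real WSJ page): an element is filtered
# out when any of its space-separated classes appears in bad_classes.
print(filter_classes('trendingNow insetContent'))  # True, 'trendingNow' is blacklisted
print(filter_classes('articleBody'))               # False
print(filter_classes(None))                        # False, guard for a missing class attribute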
@ -42,14 +43,15 @@ class WallStreetJournal(BasicNewsRecipe):
|
||||
remove_tags_before = dict(name='h1')
|
||||
remove_tags = [
|
||||
dict(id=["articleTabs_tab_article",
|
||||
"articleTabs_tab_comments",
|
||||
'articleTabs_panel_comments', 'footer',
|
||||
"articleTabs_tab_comments", 'msnLinkback', 'yahooLinkback',
|
||||
'articleTabs_panel_comments', 'footer', 'emailThisScrim', 'emailConfScrim', 'emailErrorScrim',
|
||||
"articleTabs_tab_interactive", "articleTabs_tab_video",
|
||||
"articleTabs_tab_map", "articleTabs_tab_slideshow",
|
||||
"articleTabs_tab_quotes", "articleTabs_tab_document",
|
||||
"printModeAd", "aFbLikeAuth", "videoModule",
|
||||
"mostRecommendations", "topDiscussions"]),
|
||||
{'class':['footer_columns','network','insetCol3wide','interactive','video','slideshow','map','insettip','insetClose','more_in', "insetContent", 'articleTools_bottom', 'aTools', "tooltip", "adSummary", "nav-inline"]},
|
||||
{'class':['footer_columns','hidden', 'network','insetCol3wide','interactive','video','slideshow','map','insettip',
|
||||
'insetClose','more_in', "insetContent", 'articleTools_bottom', 'aTools', "tooltip", "adSummary", "nav-inline"]},
|
||||
dict(rel='shortcut icon'),
|
||||
{'class':filter_classes},
|
||||
]
|
||||
@ -74,7 +76,10 @@ class WallStreetJournal(BasicNewsRecipe):
|
||||
for tag in soup.findAll(name=['table', 'tr', 'td']):
|
||||
tag.name = 'div'
|
||||
|
||||
for tag in soup.findAll('div', dict(id=["articleThumbnail_1", "articleThumbnail_2", "articleThumbnail_3", "articleThumbnail_4", "articleThumbnail_5", "articleThumbnail_6", "articleThumbnail_7"])):
|
||||
for tag in soup.findAll('div', dict(id=[
|
||||
"articleThumbnail_1", "articleThumbnail_2", "articleThumbnail_3",
|
||||
"articleThumbnail_4", "articleThumbnail_5", "articleThumbnail_6",
|
||||
"articleThumbnail_7"])):
|
||||
tag.extract()
|
||||
|
||||
return soup
|
||||
@ -92,7 +97,7 @@ class WallStreetJournal(BasicNewsRecipe):
|
||||
except:
|
||||
articles = []
|
||||
if articles:
|
||||
feeds.append((title, articles))
|
||||
feeds.append((title, articles))
|
||||
return feeds
|
||||
|
||||
def abs_wsj_url(self, href):
|
||||
@ -107,7 +112,7 @@ class WallStreetJournal(BasicNewsRecipe):
|
||||
if date is not None:
|
||||
self.timefmt = ' [%s]'%self.tag_to_string(date)
|
||||
|
||||
cov = soup.find('div', attrs={'class':'itpSectionHeaderPdf'})
|
||||
cov = soup.find('div', attrs={'class':lambda x: x and 'itpSectionHeaderPdf' in x.split()})
|
||||
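# Explanatory note, not in the original diff: the old lookup only matched a div
# whose class attribute was exactly 'itpSectionHeaderPdf'; the lambda form above
# also matches when that class appears alongside other classes on the element.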
if cov is not None:
|
||||
a = cov.find('a', href=True)
|
||||
if a is not None:
|
||||
@ -119,16 +124,16 @@ class WallStreetJournal(BasicNewsRecipe):
|
||||
for a in div.findAll('a', href=lambda x: x and '/itp/' in x):
|
||||
pageone = a['href'].endswith('pageone')
|
||||
if pageone:
|
||||
title = 'Front Section'
|
||||
url = self.abs_wsj_url(a['href'])
|
||||
feeds = self.wsj_add_feed(feeds,title,url)
|
||||
title = "What's News"
|
||||
url = url.replace('pageone','whatsnews')
|
||||
feeds = self.wsj_add_feed(feeds,title,url)
|
||||
title = 'Front Section'
|
||||
url = self.abs_wsj_url(a['href'])
|
||||
feeds = self.wsj_add_feed(feeds,title,url)
|
||||
title = "What's News"
|
||||
url = url.replace('pageone','whatsnews')
|
||||
feeds = self.wsj_add_feed(feeds,title,url)
|
||||
else:
|
||||
title = self.tag_to_string(a)
|
||||
url = self.abs_wsj_url(a['href'])
|
||||
feeds = self.wsj_add_feed(feeds,title,url)
|
||||
title = self.tag_to_string(a)
|
||||
url = self.abs_wsj_url(a['href'])
|
||||
feeds = self.wsj_add_feed(feeds,title,url)
|
||||
return feeds
|
||||
|
||||
def wsj_find_wn_articles(self, url):
|
||||
@ -137,22 +142,22 @@ class WallStreetJournal(BasicNewsRecipe):
|
||||
|
||||
whats_news = soup.find('div', attrs={'class':lambda x: x and 'whatsNews-simple' in x})
|
||||
if whats_news is not None:
|
||||
for a in whats_news.findAll('a', href=lambda x: x and '/article/' in x):
|
||||
container = a.findParent(['p'])
|
||||
meta = a.find(attrs={'class':'meta_sectionName'})
|
||||
if meta is not None:
|
||||
meta.extract()
|
||||
title = self.tag_to_string(a).strip()
|
||||
url = a['href']
|
||||
desc = ''
|
||||
if container is not None:
|
||||
desc = self.tag_to_string(container)
|
||||
for a in whats_news.findAll('a', href=lambda x: x and '/article/' in x):
|
||||
container = a.findParent(['p'])
|
||||
meta = a.find(attrs={'class':'meta_sectionName'})
|
||||
if meta is not None:
|
||||
meta.extract()
|
||||
title = self.tag_to_string(a).strip()
|
||||
url = a['href']
|
||||
desc = ''
|
||||
if container is not None:
|
||||
desc = self.tag_to_string(container)
|
||||
|
||||
articles.append({'title':title, 'url':url,
|
||||
'description':desc, 'date':''})
|
||||
articles.append({'title':title, 'url':url,
|
||||
'description':desc, 'date':''})
|
||||
|
||||
self.log('\tFound WN article:', title)
|
||||
self.log('\t\t', desc)
|
||||
self.log('\tFound WN article:', title)
|
||||
self.log('\t\t', desc)
|
||||
|
||||
return articles
|
||||
|
||||
@ -161,18 +166,18 @@ class WallStreetJournal(BasicNewsRecipe):
|
||||
|
||||
whats_news = soup.find('div', attrs={'class':lambda x: x and 'whatsNews-simple' in x})
|
||||
if whats_news is not None:
|
||||
whats_news.extract()
|
||||
whats_news.extract()
|
||||
|
||||
articles = []
|
||||
|
||||
flavorarea = soup.find('div', attrs={'class':lambda x: x and 'ahed' in x})
|
||||
if flavorarea is not None:
|
||||
flavorstory = flavorarea.find('a', href=lambda x: x and x.startswith('/article'))
|
||||
if flavorstory is not None:
|
||||
flavorstory['class'] = 'mjLinkItem'
|
||||
metapage = soup.find('span', attrs={'class':lambda x: x and 'meta_sectionName' in x})
|
||||
if metapage is not None:
|
||||
flavorstory.append( copy.copy(metapage) ) #metapage should always be A1 because that should be first on the page
|
||||
flavorstory = flavorarea.find('a', href=lambda x: x and x.startswith('/article'))
|
||||
if flavorstory is not None:
|
||||
flavorstory['class'] = 'mjLinkItem'
|
||||
metapage = soup.find('span', attrs={'class':lambda x: x and 'meta_sectionName' in x})
|
||||
if metapage is not None:
|
||||
flavorstory.append(copy.copy(metapage)) # metapage should always be A1 because that should be first on the page
|
||||
|
||||
for a in soup.findAll('a', attrs={'class':'mjLinkItem'}, href=True):
|
||||
container = a.findParent(['li', 'div'])
|
||||
@ -199,7 +204,6 @@ class WallStreetJournal(BasicNewsRecipe):
|
||||
|
||||
return articles
|
||||
|
||||
|
||||
def cleanup(self):
|
||||
self.browser.open('http://online.wsj.com/logout?url=http://online.wsj.com')
|
||||
|
||||
|
@ -13,14 +13,14 @@ msgstr ""
|
||||
"Report-Msgid-Bugs-To: Debian iso-codes team <pkg-isocodes-"
|
||||
"devel@lists.alioth.debian.org>\n"
|
||||
"POT-Creation-Date: 2011-11-25 14:01+0000\n"
|
||||
"PO-Revision-Date: 2013-03-23 10:17+0000\n"
|
||||
"PO-Revision-Date: 2013-05-21 06:13+0000\n"
|
||||
"Last-Translator: Глория Хрусталёва <gloriya@hushmail.com>\n"
|
||||
"Language-Team: Russian <debian-l10n-russian@lists.debian.org>\n"
|
||||
"MIME-Version: 1.0\n"
|
||||
"Content-Type: text/plain; charset=UTF-8\n"
|
||||
"Content-Transfer-Encoding: 8bit\n"
|
||||
"X-Launchpad-Export-Date: 2013-03-24 04:45+0000\n"
|
||||
"X-Generator: Launchpad (build 16540)\n"
|
||||
"X-Launchpad-Export-Date: 2013-05-22 04:38+0000\n"
|
||||
"X-Generator: Launchpad (build 16626)\n"
|
||||
"Language: ru\n"
|
||||
|
||||
#. name for aaa
|
||||
@ -5361,7 +5361,7 @@ msgstr ""
|
||||
|
||||
#. name for coa
|
||||
msgid "Malay; Cocos Islands"
|
||||
msgstr ""
|
||||
msgstr "Малайский; Кокосовые острова"
|
||||
|
||||
#. name for cob
|
||||
msgid "Chicomuceltec"
|
||||
|
@ -30,14 +30,14 @@ msgstr ""
|
||||
"Report-Msgid-Bugs-To: Debian iso-codes team <pkg-isocodes-"
|
||||
"devel@lists.alioth.debian.org>\n"
|
||||
"POT-Creation-Date: 2011-11-25 14:01+0000\n"
|
||||
"PO-Revision-Date: 2013-05-13 05:58+0000\n"
|
||||
"PO-Revision-Date: 2013-05-19 09:23+0000\n"
|
||||
"Last-Translator: Merarom <Unknown>\n"
|
||||
"Language-Team: Swedish <sv@li.org>\n"
|
||||
"MIME-Version: 1.0\n"
|
||||
"Content-Type: text/plain; charset=UTF-8\n"
|
||||
"Content-Transfer-Encoding: 8bit\n"
|
||||
"X-Launchpad-Export-Date: 2013-05-14 05:30+0000\n"
|
||||
"X-Generator: Launchpad (build 16617)\n"
|
||||
"X-Launchpad-Export-Date: 2013-05-20 05:34+0000\n"
|
||||
"X-Generator: Launchpad (build 16626)\n"
|
||||
"Language: sv\n"
|
||||
|
||||
#. name for aaa
|
||||
@ -4582,35 +4582,35 @@ msgstr ""
|
||||
|
||||
#. name for bzl
|
||||
msgid "Boano (Sulawesi)"
|
||||
msgstr ""
|
||||
msgstr "Boano (Sulawesi/Cebeles)"
|
||||
|
||||
#. name for bzm
|
||||
msgid "Bolondo"
|
||||
msgstr ""
|
||||
msgstr "Bolondo"
|
||||
|
||||
#. name for bzn
|
||||
msgid "Boano (Maluku)"
|
||||
msgstr ""
|
||||
msgstr "Boano (Maluku)"
|
||||
|
||||
#. name for bzo
|
||||
msgid "Bozaba"
|
||||
msgstr ""
|
||||
msgstr "Bozaba"
|
||||
|
||||
#. name for bzp
|
||||
msgid "Kemberano"
|
||||
msgstr ""
|
||||
msgstr "Kemberano"
|
||||
|
||||
#. name for bzq
|
||||
msgid "Buli (Indonesia)"
|
||||
msgstr ""
|
||||
msgstr "Buli (Indonesien)"
|
||||
|
||||
#. name for bzr
|
||||
msgid "Biri"
|
||||
msgstr ""
|
||||
msgstr "Biri"
|
||||
|
||||
#. name for bzs
|
||||
msgid "Brazilian Sign Language"
|
||||
msgstr ""
|
||||
msgstr "Brasilianskt teckenspråk"
|
||||
|
||||
#. name for bzt
|
||||
msgid "Brithenig"
|
||||
@ -4618,39 +4618,39 @@ msgstr ""
|
||||
|
||||
#. name for bzu
|
||||
msgid "Burmeso"
|
||||
msgstr ""
|
||||
msgstr "Burmanska"
|
||||
|
||||
#. name for bzv
|
||||
msgid "Bebe"
|
||||
msgstr ""
|
||||
msgstr "Bebe"
|
||||
|
||||
#. name for bzw
|
||||
msgid "Basa (Nigeria)"
|
||||
msgstr ""
|
||||
msgstr "Basa (Nigeria)"
|
||||
|
||||
#. name for bzx
|
||||
msgid "Bozo; Kɛlɛngaxo"
|
||||
msgstr ""
|
||||
msgstr "Bozo; (Mali)"
|
||||
|
||||
#. name for bzy
|
||||
msgid "Obanliku"
|
||||
msgstr ""
|
||||
msgstr "Obanliku"
|
||||
|
||||
#. name for bzz
|
||||
msgid "Evant"
|
||||
msgstr ""
|
||||
msgstr "Evant"
|
||||
|
||||
#. name for caa
|
||||
msgid "Chortí"
|
||||
msgstr ""
|
||||
msgstr "Chortí"
|
||||
|
||||
#. name for cab
|
||||
msgid "Garifuna"
|
||||
msgstr ""
|
||||
msgstr "Garifuna"
|
||||
|
||||
#. name for cac
|
||||
msgid "Chuj"
|
||||
msgstr ""
|
||||
msgstr "Chuj"
|
||||
|
||||
#. name for cad
|
||||
msgid "Caddo"
|
||||
@ -4658,59 +4658,59 @@ msgstr "Caddo"
|
||||
|
||||
#. name for cae
|
||||
msgid "Lehar"
|
||||
msgstr ""
|
||||
msgstr "Lezginska"
|
||||
|
||||
#. name for caf
|
||||
msgid "Carrier; Southern"
|
||||
msgstr ""
|
||||
msgstr "Carrier; södra"
|
||||
|
||||
#. name for cag
|
||||
msgid "Nivaclé"
|
||||
msgstr ""
|
||||
msgstr "Nivaclé"
|
||||
|
||||
#. name for cah
|
||||
msgid "Cahuarano"
|
||||
msgstr ""
|
||||
msgstr "Cahuarano; Peru"
|
||||
|
||||
#. name for caj
|
||||
msgid "Chané"
|
||||
msgstr ""
|
||||
msgstr "Chané"
|
||||
|
||||
#. name for cak
|
||||
msgid "Kaqchikel"
|
||||
msgstr ""
|
||||
msgstr "Kaqchikel"
|
||||
|
||||
#. name for cal
|
||||
msgid "Carolinian"
|
||||
msgstr ""
|
||||
msgstr "Carolinian"
|
||||
|
||||
#. name for cam
|
||||
msgid "Cemuhî"
|
||||
msgstr ""
|
||||
msgstr "Cemuhî"
|
||||
|
||||
#. name for can
|
||||
msgid "Chambri"
|
||||
msgstr ""
|
||||
msgstr "Chambri"
|
||||
|
||||
#. name for cao
|
||||
msgid "Chácobo"
|
||||
msgstr ""
|
||||
msgstr "Chácobo"
|
||||
|
||||
#. name for cap
|
||||
msgid "Chipaya"
|
||||
msgstr ""
|
||||
msgstr "Chipaya"
|
||||
|
||||
#. name for caq
|
||||
msgid "Nicobarese; Car"
|
||||
msgstr ""
|
||||
msgstr "Nicobarese; Car"
|
||||
|
||||
#. name for car
|
||||
msgid "Carib; Galibi"
|
||||
msgstr ""
|
||||
msgstr "Carib; Galibi"
|
||||
|
||||
#. name for cas
|
||||
msgid "Tsimané"
|
||||
msgstr ""
|
||||
msgstr "Tsimshian; Britiska Columbia"
|
||||
|
||||
#. name for cat
|
||||
msgid "Catalan"
|
||||
@ -4718,15 +4718,15 @@ msgstr "Katalanska"
|
||||
|
||||
#. name for cav
|
||||
msgid "Cavineña"
|
||||
msgstr ""
|
||||
msgstr "Cavineña"
|
||||
|
||||
#. name for caw
|
||||
msgid "Callawalla"
|
||||
msgstr ""
|
||||
msgstr "Callawalla; Bolivia"
|
||||
|
||||
#. name for cax
|
||||
msgid "Chiquitano"
|
||||
msgstr ""
|
||||
msgstr "Chiquitano; Bolivia"
|
||||
|
||||
#. name for cay
|
||||
msgid "Cayuga"
|
||||
@ -4734,115 +4734,115 @@ msgstr ""
|
||||
|
||||
#. name for caz
|
||||
msgid "Canichana"
|
||||
msgstr ""
|
||||
msgstr "Canichana"
|
||||
|
||||
#. name for cbb
|
||||
msgid "Cabiyarí"
|
||||
msgstr ""
|
||||
msgstr "Cabiyarí"
|
||||
|
||||
#. name for cbc
|
||||
msgid "Carapana"
|
||||
msgstr ""
|
||||
msgstr "Carapana; Colombia & Brasilien"
|
||||
|
||||
#. name for cbd
|
||||
msgid "Carijona"
|
||||
msgstr ""
|
||||
msgstr "Carijona"
|
||||
|
||||
#. name for cbe
|
||||
msgid "Chipiajes"
|
||||
msgstr ""
|
||||
msgstr "Chipiajes"
|
||||
|
||||
#. name for cbg
|
||||
msgid "Chimila"
|
||||
msgstr ""
|
||||
msgstr "Chimila"
|
||||
|
||||
#. name for cbh
|
||||
msgid "Cagua"
|
||||
msgstr ""
|
||||
msgstr "Cagua;Venezuela"
|
||||
|
||||
#. name for cbi
|
||||
msgid "Chachi"
|
||||
msgstr ""
|
||||
msgstr "Chachi; Ecuador"
|
||||
|
||||
#. name for cbj
|
||||
msgid "Ede Cabe"
|
||||
msgstr ""
|
||||
msgstr "Ede Cabe"
|
||||
|
||||
#. name for cbk
|
||||
msgid "Chavacano"
|
||||
msgstr ""
|
||||
msgstr "Chavacano; Filippinerna"
|
||||
|
||||
#. name for cbl
|
||||
msgid "Chin; Bualkhaw"
|
||||
msgstr ""
|
||||
msgstr "Chin; Bualkhaw"
|
||||
|
||||
#. name for cbn
|
||||
msgid "Nyahkur"
|
||||
msgstr ""
|
||||
msgstr "Nyahkur;Australien"
|
||||
|
||||
#. name for cbo
|
||||
msgid "Izora"
|
||||
msgstr ""
|
||||
msgstr "Izora"
|
||||
|
||||
#. name for cbr
|
||||
msgid "Cashibo-Cacataibo"
|
||||
msgstr ""
|
||||
msgstr "Cashibo-Cacataibo;Peru"
|
||||
|
||||
#. name for cbs
|
||||
msgid "Cashinahua"
|
||||
msgstr ""
|
||||
msgstr "Cashinahua;Peru"
|
||||
|
||||
#. name for cbt
|
||||
msgid "Chayahuita"
|
||||
msgstr ""
|
||||
msgstr "Chayahuita;Peru"
|
||||
|
||||
#. name for cbu
|
||||
msgid "Candoshi-Shapra"
|
||||
msgstr ""
|
||||
msgstr "Candoshi-Shapra;Peru"
|
||||
|
||||
#. name for cbv
|
||||
msgid "Cacua"
|
||||
msgstr ""
|
||||
msgstr "Cacua;Colombia"
|
||||
|
||||
#. name for cbw
|
||||
msgid "Kinabalian"
|
||||
msgstr ""
|
||||
msgstr "Kinabalian;sydöstra Filippinerna"
|
||||
|
||||
#. name for cby
|
||||
msgid "Carabayo"
|
||||
msgstr ""
|
||||
msgstr "Carabayo;Colombia"
|
||||
|
||||
#. name for cca
|
||||
msgid "Cauca"
|
||||
msgstr ""
|
||||
msgstr "Cauca;Colombia & Panama"
|
||||
|
||||
#. name for ccc
|
||||
msgid "Chamicuro"
|
||||
msgstr ""
|
||||
msgstr "Chamicuro;Peru"
|
||||
|
||||
#. name for ccd
|
||||
msgid "Creole; Cafundo"
|
||||
msgstr ""
|
||||
msgstr "Creole; Cafundo; Brasilien"
|
||||
|
||||
#. name for cce
|
||||
msgid "Chopi"
|
||||
msgstr ""
|
||||
msgstr "Chopi;Moçambique"
|
||||
|
||||
#. name for ccg
|
||||
msgid "Daka; Samba"
|
||||
msgstr ""
|
||||
msgstr "Daka; Samba, Nigeria"
|
||||
|
||||
#. name for cch
|
||||
msgid "Atsam"
|
||||
msgstr ""
|
||||
msgstr "Atsam"
|
||||
|
||||
#. name for ccj
|
||||
msgid "Kasanga"
|
||||
msgstr ""
|
||||
msgstr "Kasanga"
|
||||
|
||||
#. name for ccl
|
||||
msgid "Cutchi-Swahili"
|
||||
msgstr ""
|
||||
msgstr "Cutchi-Swahili"
|
||||
|
||||
#. name for ccm
|
||||
msgid "Creole Malay; Malaccan"
|
||||
@ -4850,75 +4850,75 @@ msgstr ""
|
||||
|
||||
#. name for cco
|
||||
msgid "Chinantec; Comaltepec"
|
||||
msgstr ""
|
||||
msgstr "Chinantec; Comaltepec"
|
||||
|
||||
#. name for ccp
|
||||
msgid "Chakma"
|
||||
msgstr ""
|
||||
msgstr "Chakma"
|
||||
|
||||
#. name for ccq
|
||||
msgid "Chaungtha"
|
||||
msgstr ""
|
||||
msgstr "Chaungtha"
|
||||
|
||||
#. name for ccr
|
||||
msgid "Cacaopera"
|
||||
msgstr ""
|
||||
msgstr "Cacaopera"
|
||||
|
||||
#. name for cda
|
||||
msgid "Choni"
|
||||
msgstr ""
|
||||
msgstr "Choni"
|
||||
|
||||
#. name for cde
|
||||
msgid "Chenchu"
|
||||
msgstr ""
|
||||
msgstr "Chenchu"
|
||||
|
||||
#. name for cdf
|
||||
msgid "Chiru"
|
||||
msgstr ""
|
||||
msgstr "Chiru"
|
||||
|
||||
#. name for cdg
|
||||
msgid "Chamari"
|
||||
msgstr ""
|
||||
msgstr "Chamari"
|
||||
|
||||
#. name for cdh
|
||||
msgid "Chambeali"
|
||||
msgstr ""
|
||||
msgstr "Chambeali"
|
||||
|
||||
#. name for cdi
|
||||
msgid "Chodri"
|
||||
msgstr ""
|
||||
msgstr "Chodri"
|
||||
|
||||
#. name for cdj
|
||||
msgid "Churahi"
|
||||
msgstr ""
|
||||
msgstr "Churahi"
|
||||
|
||||
#. name for cdm
|
||||
msgid "Chepang"
|
||||
msgstr ""
|
||||
msgstr "Chepang"
|
||||
|
||||
#. name for cdn
|
||||
msgid "Chaudangsi"
|
||||
msgstr ""
|
||||
msgstr "Chaudangsi"
|
||||
|
||||
#. name for cdo
|
||||
msgid "Chinese; Min Dong"
|
||||
msgstr ""
|
||||
msgstr "Kinesiska; Min Dong"
|
||||
|
||||
#. name for cdr
|
||||
msgid "Cinda-Regi-Tiyal"
|
||||
msgstr ""
|
||||
msgstr "Cinda-Regi-Tiyal"
|
||||
|
||||
#. name for cds
|
||||
msgid "Chadian Sign Language"
|
||||
msgstr ""
|
||||
msgstr "Chadian teckenspråk"
|
||||
|
||||
#. name for cdy
|
||||
msgid "Chadong"
|
||||
msgstr ""
|
||||
msgstr "Chadong"
|
||||
|
||||
#. name for cdz
|
||||
msgid "Koda"
|
||||
msgstr ""
|
||||
msgstr "Koda"
|
||||
|
||||
#. name for cea
|
||||
msgid "Chehalis; Lower"
|
||||
@ -4930,11 +4930,11 @@ msgstr "Cebuano"
|
||||
|
||||
#. name for ceg
|
||||
msgid "Chamacoco"
|
||||
msgstr ""
|
||||
msgstr "Chamacoco"
|
||||
|
||||
#. name for cen
|
||||
msgid "Cen"
|
||||
msgstr ""
|
||||
msgstr "Cen"
|
||||
|
||||
#. name for ces
|
||||
msgid "Czech"
|
||||
@ -4942,7 +4942,7 @@ msgstr "Tjeckiska"
|
||||
|
||||
#. name for cet
|
||||
msgid "Centúúm"
|
||||
msgstr ""
|
||||
msgstr "Centúúm"
|
||||
|
||||
#. name for cfa
|
||||
msgid "Dijim-Bwilim"
|
||||
@ -4950,31 +4950,31 @@ msgstr ""
|
||||
|
||||
#. name for cfd
|
||||
msgid "Cara"
|
||||
msgstr ""
|
||||
msgstr "Cara"
|
||||
|
||||
#. name for cfg
|
||||
msgid "Como Karim"
|
||||
msgstr ""
|
||||
msgstr "Como Karim"
|
||||
|
||||
#. name for cfm
|
||||
msgid "Chin; Falam"
|
||||
msgstr ""
|
||||
msgstr "Chin; Falam"
|
||||
|
||||
#. name for cga
|
||||
msgid "Changriwa"
|
||||
msgstr ""
|
||||
msgstr "Changriwa"
|
||||
|
||||
#. name for cgc
|
||||
msgid "Kagayanen"
|
||||
msgstr ""
|
||||
msgstr "Kagayanen"
|
||||
|
||||
#. name for cgg
|
||||
msgid "Chiga"
|
||||
msgstr ""
|
||||
msgstr "Chiga"
|
||||
|
||||
#. name for cgk
|
||||
msgid "Chocangacakha"
|
||||
msgstr ""
|
||||
msgstr "Chocangacakha; Butan"
|
||||
|
||||
#. name for cha
|
||||
msgid "Chamorro"
|
||||
@ -4986,11 +4986,11 @@ msgstr "Chibcha"
|
||||
|
||||
#. name for chc
|
||||
msgid "Catawba"
|
||||
msgstr ""
|
||||
msgstr "Catawba"
|
||||
|
||||
#. name for chd
|
||||
msgid "Chontal; Highland Oaxaca"
|
||||
msgstr ""
|
||||
msgstr "Chontal; Highland Oaxaca; Mexico"
|
||||
|
||||
#. name for che
|
||||
msgid "Chechen"
|
||||
@ -4998,7 +4998,7 @@ msgstr "Tjetjenska"
|
||||
|
||||
#. name for chf
|
||||
msgid "Chontal; Tabasco"
|
||||
msgstr ""
|
||||
msgstr "Chontal; Tabasco"
|
||||
|
||||
#. name for chg
|
||||
msgid "Chagatai"
|
||||
@ -5006,7 +5006,7 @@ msgstr "Chagatai"
|
||||
|
||||
#. name for chh
|
||||
msgid "Chinook"
|
||||
msgstr ""
|
||||
msgstr "Chinook"
|
||||
|
||||
#. name for chj
|
||||
msgid "Chinantec; Ojitlán"
|
||||
|
@ -4,7 +4,7 @@ __license__ = 'GPL v3'
__copyright__ = '2008, Kovid Goyal kovid@kovidgoyal.net'
__docformat__ = 'restructuredtext en'
__appname__ = u'calibre'
numeric_version = (0, 9, 31)
numeric_version = (0, 9, 32)
__version__ = u'.'.join(map(unicode, numeric_version))
__author__ = u"Kovid Goyal <kovid@kovidgoyal.net>"
@ -1661,6 +1661,7 @@ class StoreWoblinkStore(StoreBase):

headquarters = 'PL'
formats = ['EPUB', 'MOBI', 'PDF', 'WOBLINK']
affiliate = True

class XinXiiStore(StoreBase):
name = 'XinXii'
@ -25,7 +25,7 @@ class ANDROID(USBMS):
|
||||
|
||||
VENDOR_ID = {
|
||||
# HTC
|
||||
0x0bb4 : { 0xc02 : HTC_BCDS,
|
||||
0x0bb4 : {0xc02 : HTC_BCDS,
|
||||
0xc01 : HTC_BCDS,
|
||||
0xff9 : HTC_BCDS,
|
||||
0xc86 : HTC_BCDS,
|
||||
@ -52,13 +52,13 @@ class ANDROID(USBMS):
|
||||
},
|
||||
|
||||
# Eken
|
||||
0x040d : { 0x8510 : [0x0001], 0x0851 : [0x1] },
|
||||
0x040d : {0x8510 : [0x0001], 0x0851 : [0x1]},
|
||||
|
||||
# Trekstor
|
||||
0x1e68 : { 0x006a : [0x0231] },
|
||||
0x1e68 : {0x006a : [0x0231]},
|
||||
|
||||
# Motorola
|
||||
0x22b8 : { 0x41d9 : [0x216], 0x2d61 : [0x100], 0x2d67 : [0x100],
|
||||
0x22b8 : {0x41d9 : [0x216], 0x2d61 : [0x100], 0x2d67 : [0x100],
|
||||
0x2de8 : [0x229],
|
||||
0x41db : [0x216], 0x4285 : [0x216], 0x42a3 : [0x216],
|
||||
0x4286 : [0x216], 0x42b3 : [0x216], 0x42b4 : [0x216],
|
||||
@ -111,7 +111,7 @@ class ANDROID(USBMS):
|
||||
},
|
||||
|
||||
# Samsung
|
||||
0x04e8 : { 0x681d : [0x0222, 0x0223, 0x0224, 0x0400],
|
||||
0x04e8 : {0x681d : [0x0222, 0x0223, 0x0224, 0x0400],
|
||||
0x681c : [0x0222, 0x0223, 0x0224, 0x0400],
|
||||
0x6640 : [0x0100],
|
||||
0x685b : [0x0400, 0x0226],
|
||||
@ -130,7 +130,7 @@ class ANDROID(USBMS):
|
||||
0xc001 : [0x0226],
|
||||
0xc004 : [0x0226],
|
||||
0x8801 : [0x0226, 0x0227],
|
||||
0xe115 : [0x0216], # PocketBook A10
|
||||
0xe115 : [0x0216], # PocketBook A10
|
||||
},
|
||||
|
||||
# Another Viewsonic
|
||||
@ -139,10 +139,10 @@ class ANDROID(USBMS):
|
||||
},
|
||||
|
||||
# Acer
|
||||
0x502 : { 0x3203 : [0x0100, 0x224]},
|
||||
0x502 : {0x3203 : [0x0100, 0x224]},
|
||||
|
||||
# Dell
|
||||
0x413c : { 0xb007 : [0x0100, 0x0224, 0x0226]},
|
||||
0x413c : {0xb007 : [0x0100, 0x0224, 0x0226]},
|
||||
|
||||
# LG
|
||||
0x1004 : {
|
||||
@ -166,25 +166,25 @@ class ANDROID(USBMS):
|
||||
|
||||
# Huawei
|
||||
# Disabled as this USB id is used by various USB flash drives
|
||||
#0x45e : { 0x00e1 : [0x007], },
|
||||
# 0x45e : { 0x00e1 : [0x007], },
|
||||
|
||||
# T-Mobile
|
||||
0x0408 : { 0x03ba : [0x0109], },
|
||||
0x0408 : {0x03ba : [0x0109], },
|
||||
|
||||
# Xperia
|
||||
0x13d3 : { 0x3304 : [0x0001, 0x0002] },
|
||||
0x13d3 : {0x3304 : [0x0001, 0x0002]},
|
||||
|
||||
# CREEL?? Also Nextbook and Wayteq
|
||||
0x5e3 : { 0x726 : [0x222] },
|
||||
0x5e3 : {0x726 : [0x222]},
|
||||
|
||||
# ZTE
|
||||
0x19d2 : { 0x1353 : [0x226], 0x1351 : [0x227] },
|
||||
0x19d2 : {0x1353 : [0x226], 0x1351 : [0x227]},
|
||||
|
||||
# Advent
|
||||
0x0955 : { 0x7100 : [0x9999] }, # This is the same as the Notion Ink Adam
|
||||
0x0955 : {0x7100 : [0x9999]}, # This is the same as the Notion Ink Adam
|
||||
|
||||
# Kobo
|
||||
0x2237: { 0x2208 : [0x0226] },
|
||||
0x2237: {0x2208 : [0x0226]},
|
||||
|
||||
# Lenovo
|
||||
0x17ef : {
|
||||
@ -193,10 +193,10 @@ class ANDROID(USBMS):
|
||||
},
|
||||
|
||||
# Pantech
|
||||
0x10a9 : { 0x6050 : [0x227] },
|
||||
0x10a9 : {0x6050 : [0x227]},
|
||||
|
||||
# Prestigio and Teclast
|
||||
0x2207 : { 0 : [0x222], 0x10 : [0x222] },
|
||||
0x2207 : {0 : [0x222], 0x10 : [0x222]},
|
||||
|
||||
}
|
||||
EBOOK_DIR_MAIN = ['eBooks/import', 'wordplayer/calibretransfer', 'Books',
|
||||
@ -219,7 +219,7 @@ class ANDROID(USBMS):
|
||||
'POCKET', 'ONDA_MID', 'ZENITHIN', 'INGENIC', 'PMID701C', 'PD',
|
||||
'PMP5097C', 'MASS', 'NOVO7', 'ZEKI', 'COBY', 'SXZ', 'USB_2.0',
|
||||
'COBY_MID', 'VS', 'AINOL', 'TOPWISE', 'PAD703', 'NEXT8D12',
|
||||
'MEDIATEK', 'KEENHI', 'TECLAST', 'SURFTAB']
|
||||
'MEDIATEK', 'KEENHI', 'TECLAST', 'SURFTAB', 'XENTA',]
|
||||
WINDOWS_MAIN_MEM = ['ANDROID_PHONE', 'A855', 'A853', 'A953', 'INC.NEXUS_ONE',
|
||||
'__UMS_COMPOSITE', '_MB200', 'MASS_STORAGE', '_-_CARD', 'SGH-I897',
|
||||
'GT-I9000', 'FILE-STOR_GADGET', 'SGH-T959_CARD', 'SGH-T959', 'SAMSUNG_ANDROID',
|
||||
@ -241,6 +241,7 @@ class ANDROID(USBMS):
|
||||
'S5830I_CARD', 'MID7042', 'LINK-CREATE', '7035', 'VIEWPAD_7E',
|
||||
'NOVO7', 'MB526', '_USB#WYK7MSF8KE', 'TABLET_PC', 'F', 'MT65XX_MS',
|
||||
'ICS', 'E400', '__FILE-STOR_GADG', 'ST80208-1', 'GT-S5660M_CARD', 'XT894', '_USB',
|
||||
'PROD_TAB13-201',
|
||||
]
|
||||
WINDOWS_CARD_A_MEM = ['ANDROID_PHONE', 'GT-I9000_CARD', 'SGH-I897',
|
||||
'FILE-STOR_GADGET', 'SGH-T959_CARD', 'SGH-T959', 'SAMSUNG_ANDROID', 'GT-P1000_CARD',
|
||||
@ -253,7 +254,7 @@ class ANDROID(USBMS):
|
||||
'UMS_COMPOSITE', 'PRO', '.KOBO_VOX', 'SGH-T989_CARD', 'SGH-I727',
|
||||
'USB_FLASH_DRIVER', 'ANDROID', 'MID7042', '7035', 'VIEWPAD_7E',
|
||||
'NOVO7', 'ADVANCED', 'TABLET_PC', 'F', 'E400_SD_CARD', 'ST80208-1', 'XT894',
|
||||
'_USB',
|
||||
'_USB', 'PROD_TAB13-201',
|
||||
]
|
||||
|
||||
OSX_MAIN_MEM = 'Android Device Main Memory'
|
||||
@ -369,7 +370,6 @@ class WEBOS(USBMS):
|
||||
except ImportError:
|
||||
import Image, ImageDraw
|
||||
|
||||
|
||||
coverdata = getattr(metadata, 'thumbnail', None)
|
||||
if coverdata and coverdata[2]:
|
||||
cover = Image.open(cStringIO.StringIO(coverdata[2]))
|
||||
@ -418,3 +418,4 @@ class WEBOS(USBMS):
|
||||
coverfile.write(coverdata)
|
||||
|
||||
|
||||
|
||||
|
@ -279,11 +279,11 @@ class POCKETBOOK602(USBMS):
class POCKETBOOK622(POCKETBOOK602):

name = 'PocketBook 622 Device Interface'
description = _('Communicate with the PocketBook 622 reader.')
description = _('Communicate with the PocketBook 622 and 623 readers.')
EBOOK_DIR_MAIN = ''

VENDOR_ID = [0x0489]
PRODUCT_ID = [0xe107]
PRODUCT_ID = [0xe107, 0xcff1]
BCD = [0x0326]

VENDOR_NAME = 'LINUX'
@ -92,7 +92,7 @@ class NOOK_COLOR(NOOK):
WINDOWS_MAIN_MEM = WINDOWS_CARD_A_MEM = ['EBOOK_DISK', 'NOOK_TABLET',
'NOOK_SIMPLETOUCH']
EBOOK_DIR_MAIN = 'My Files'
# SCAN_FROM_ROOT = True
SCAN_FROM_ROOT = True
NEWS_IN_FOLDER = False

def upload_cover(self, path, filename, metadata, filepath):
@ -74,7 +74,7 @@ def read_border(parent, dest):

for border in XPath('./w:pBdr')(parent):
for edge in ('left', 'top', 'right', 'bottom'):
for elem in XPath('./w:%s' % edge):
for elem in XPath('./w:%s' % edge)(border):
color = get(elem, 'w:color')
if color is not None:
vals['border_%s_color' % edge] = simple_color(color)
@ -151,8 +151,8 @@ def read_spacing(parent, dest):

l, lr = get(s, 'w:line'), get(s, 'w:lineRule', 'auto')
if l is not None:
lh = simple_float(l, 0.05) if lr in {'exactly', 'atLeast'} else simple_float(l, 1/240.0)
line_height = '%.3g%s' % (lh, 'pt' if lr in {'exactly', 'atLeast'} else '')
lh = simple_float(l, 0.05) if lr in {'exact', 'atLeast'} else simple_float(l, 1/240.0)
line_height = '%.3g%s' % (lh, 'pt' if lr in {'exact', 'atLeast'} else '')

setattr(dest, 'margin_top', padding_top)
setattr(dest, 'margin_bottom', padding_bottom)
@ -189,6 +189,89 @@ def read_numbering(parent, dest):
|
||||
val = (num_id, lvl) if num_id is not None or lvl is not None else inherit
|
||||
setattr(dest, 'numbering', val)
|
||||
|
||||
class Frame(object):
|
||||
|
||||
all_attributes = ('drop_cap', 'h', 'w', 'h_anchor', 'h_rule', 'v_anchor', 'wrap',
|
||||
'h_space', 'v_space', 'lines', 'x_align', 'y_align', 'x', 'y')
|
||||
|
||||
def __init__(self, fp):
|
||||
self.drop_cap = get(fp, 'w:dropCap', 'none')
|
||||
try:
|
||||
self.h = int(get(fp, 'w:h'))/20
|
||||
except (ValueError, TypeError):
|
||||
self.h = 0
|
||||
try:
|
||||
self.w = int(get(fp, 'w:w'))/20
|
||||
except (ValueError, TypeError):
|
||||
self.w = None
|
||||
try:
|
||||
self.x = int(get(fp, 'w:x'))/20
|
||||
except (ValueError, TypeError):
|
||||
self.x = 0
|
||||
try:
|
||||
self.y = int(get(fp, 'w:y'))/20
|
||||
except (ValueError, TypeError):
|
||||
self.y = 0
|
||||
|
||||
self.h_anchor = get(fp, 'w:hAnchor', 'page')
|
||||
self.h_rule = get(fp, 'w:hRule', 'auto')
|
||||
self.v_anchor = get(fp, 'w:vAnchor', 'page')
|
||||
self.wrap = get(fp, 'w:wrap', 'around')
|
||||
self.x_align = get(fp, 'w:xAlign')
|
||||
self.y_align = get(fp, 'w:yAlign')
|
||||
|
||||
try:
|
||||
self.h_space = int(get(fp, 'w:hSpace'))/20
|
||||
except (ValueError, TypeError):
|
||||
self.h_space = 0
|
||||
try:
|
||||
self.v_space = int(get(fp, 'w:vSpace'))/20
|
||||
except (ValueError, TypeError):
|
||||
self.v_space = 0
|
||||
try:
|
||||
self.lines = int(get(fp, 'w:lines'))
|
||||
except (ValueError, TypeError):
|
||||
self.lines = 1
|
||||
|
||||
def css(self, page):
|
||||
is_dropcap = self.drop_cap in {'drop', 'margin'}
|
||||
ans = {'overflow': 'hidden'}
|
||||
|
||||
if is_dropcap:
|
||||
ans['float'] = 'left'
|
||||
ans['margin'] = '0'
|
||||
ans['padding-right'] = '0.2em'
|
||||
else:
|
||||
if self.h_rule != 'auto':
|
||||
t = 'min-height' if self.h_rule == 'atLeast' else 'height'
|
||||
ans[t] = '%.3gpt' % self.h
|
||||
if self.w is not None:
|
||||
ans['width'] = '%.3gpt' % self.w
|
||||
ans['padding-top'] = ans['padding-bottom'] = '%.3gpt' % self.v_space
|
||||
if self.wrap not in {None, 'none'}:
|
||||
ans['padding-left'] = ans['padding-right'] = '%.3gpt' % self.h_space
|
||||
if self.x_align is None:
|
||||
fl = 'left' if self.x/page.width < 0.5 else 'right'
|
||||
else:
|
||||
fl = 'right' if self.x_align == 'right' else 'left'
|
||||
ans['float'] = fl
|
||||
return ans
|
||||
|
||||
def __eq__(self, other):
|
||||
for x in self.all_attributes:
|
||||
if getattr(other, x, inherit) != getattr(self, x):
|
||||
return False
|
||||
return True
|
||||
|
||||
def __ne__(self, other):
|
||||
return not self.__eq__(other)
|
||||
|
||||
def read_frame(parent, dest):
|
||||
ans = inherit
|
||||
for fp in XPath('./w:framePr')(parent):
|
||||
ans = Frame(fp)
|
||||
setattr(dest, 'frame', ans)
|
||||
|
||||
# }}}
|
||||
|
||||
class ParagraphStyle(object):
|
||||
@ -208,7 +291,7 @@ class ParagraphStyle(object):
|
||||
|
||||
# Misc.
|
||||
'text_indent', 'text_align', 'line_height', 'direction', 'background_color',
|
||||
'numbering', 'font_family', 'font_size',
|
||||
'numbering', 'font_family', 'font_size', 'frame',
|
||||
)
|
||||
|
||||
def __init__(self, pPr=None):
|
||||
@ -225,7 +308,7 @@ class ParagraphStyle(object):
|
||||
):
|
||||
setattr(self, p, binary_property(pPr, p))
|
||||
|
||||
for x in ('border', 'indent', 'justification', 'spacing', 'direction', 'shd', 'numbering'):
|
||||
for x in ('border', 'indent', 'justification', 'spacing', 'direction', 'shd', 'numbering', 'frame'):
|
||||
f = globals()['read_%s' % x]
|
||||
f(pPr, self)
|
||||
|
||||
@ -286,5 +369,3 @@ class ParagraphStyle(object):
|
||||
return self._css
|
||||
|
||||
# TODO: keepNext must be done at markup level
|
||||
|
||||
|
||||
|
@ -11,7 +11,7 @@ import os, sys, shutil
from lxml import etree

from calibre import walk, guess_type
from calibre.ebooks.metadata import string_to_authors
from calibre.ebooks.metadata import string_to_authors, authors_to_sort_string
from calibre.ebooks.metadata.book.base import Metadata
from calibre.ebooks.docx import InvalidDOCX
from calibre.ebooks.docx.names import DOCUMENT, DOCPROPS, XPath, APPPROPS
@ -49,6 +49,7 @@ def read_doc_props(raw, mi):
aut.extend(string_to_authors(author.text))
if aut:
mi.authors = aut
mi.author_sort = authors_to_sort_string(aut)

desc = XPath('//dc:description')(root)
if desc:
@ -181,7 +182,9 @@ class DOCX(object):
else:
root = fromstring(raw)
for item in root.xpath('//*[local-name()="Relationships"]/*[local-name()="Relationship" and @Type and @Target]'):
target = '/'.join((base, item.get('Target').lstrip('/')))
target = item.get('Target')
if item.get('TargetMode', None) != 'External':
target = '/'.join((base, target.lstrip('/')))
typ = item.get('Type')
Id = item.get('Id')
by_id[Id] = by_type[typ] = target
src/calibre/ebooks/docx/footnotes.py (new file)
@ -0,0 +1,62 @@
#!/usr/bin/env python
# vim:fileencoding=utf-8
from __future__ import (unicode_literals, division, absolute_import,
print_function)

__license__ = 'GPL v3'
__copyright__ = '2013, Kovid Goyal <kovid at kovidgoyal.net>'

from collections import OrderedDict

from calibre.ebooks.docx.names import get, XPath, descendants

class Note(object):

def __init__(self, parent):
self.type = get(parent, 'w:type', 'normal')
self.parent = parent

def __iter__(self):
for p in descendants(self.parent, 'w:p'):
yield p

class Footnotes(object):

def __init__(self):
self.footnotes = {}
self.endnotes = {}
self.counter = 0
self.notes = OrderedDict()

def __call__(self, footnotes, endnotes):
if footnotes is not None:
for footnote in XPath('./w:footnote[@w:id]')(footnotes):
fid = get(footnote, 'w:id')
if fid:
self.footnotes[fid] = Note(footnote)

if endnotes is not None:
for endnote in XPath('./w:endnote[@w:id]')(endnotes):
fid = get(endnote, 'w:id')
if fid:
self.endnotes[fid] = Note(endnote)

def get_ref(self, ref):
fid = get(ref, 'w:id')
notes = self.footnotes if ref.tag.endswith('}footnoteReference') else self.endnotes
note = notes.get(fid, None)
if note is not None and note.type == 'normal':
self.counter += 1
anchor = 'note_%d' % self.counter
self.notes[anchor] = (type('')(self.counter), note)
return anchor, type('')(self.counter)
return None, None

def __iter__(self):
for anchor, (counter, note) in self.notes.iteritems():
yield anchor, counter, note

@property
def has_notes(self):
return bool(self.notes)
src/calibre/ebooks/docx/images.py (new file)
@ -0,0 +1,205 @@
|
||||
#!/usr/bin/env python
|
||||
# vim:fileencoding=utf-8
|
||||
from __future__ import (unicode_literals, division, absolute_import,
|
||||
print_function)
|
||||
|
||||
__license__ = 'GPL v3'
|
||||
__copyright__ = '2013, Kovid Goyal <kovid at kovidgoyal.net>'
|
||||
|
||||
import os
|
||||
|
||||
from lxml.html.builder import IMG
|
||||
|
||||
from calibre.ebooks.docx.names import XPath, get, barename
|
||||
from calibre.utils.filenames import ascii_filename
|
||||
from calibre.utils.imghdr import what
|
||||
|
||||
def emu_to_pt(x):
|
||||
return x / 12700
|
||||
|
||||
def get_image_properties(parent):
|
||||
width = height = None
|
||||
for extent in XPath('./wp:extent')(parent):
|
||||
try:
|
||||
width = emu_to_pt(int(extent.get('cx')))
|
||||
except (TypeError, ValueError):
|
||||
pass
|
||||
try:
|
||||
height = emu_to_pt(int(extent.get('cy')))
|
||||
except (TypeError, ValueError):
|
||||
pass
|
||||
ans = {}
|
||||
if width is not None:
|
||||
ans['width'] = '%.3gpt' % width
|
||||
if height is not None:
|
||||
ans['height'] = '%.3gpt' % height
|
||||
|
||||
alt = None
|
||||
for docPr in XPath('./wp:docPr')(parent):
|
||||
x = docPr.get('descr', None)
|
||||
if x:
|
||||
alt = x
|
||||
if docPr.get('hidden', None) in {'true', 'on', '1'}:
|
||||
ans['display'] = 'none'
|
||||
|
||||
return ans, alt
|
||||
|
||||
|
||||
def get_image_margins(elem):
|
||||
ans = {}
|
||||
for w, css in {'L':'left', 'T':'top', 'R':'right', 'B':'bottom'}.iteritems():
|
||||
val = elem.get('dist%s' % w, None)
|
||||
if val is not None:
|
||||
try:
|
||||
val = emu_to_pt(val)
|
||||
except (TypeError, ValueError):
|
||||
continue
|
||||
ans['padding-%s' % css] = '%.3gpt' % val
|
||||
return ans
|
||||
|
||||
def get_hpos(anchor, page_width):
|
||||
for ph in XPath('./wp:positionH')(anchor):
|
||||
rp = ph.get('relativeFrom', None)
|
||||
if rp == 'leftMargin':
|
||||
return 0
|
||||
if rp == 'rightMargin':
|
||||
return 1
|
||||
for align in XPath('./wp:align')(ph):
|
||||
al = align.text
|
||||
if al == 'left':
|
||||
return 0
|
||||
if al == 'center':
|
||||
return 0.5
|
||||
if al == 'right':
|
||||
return 1
|
||||
for po in XPath('./wp:posOffset')(ph):
|
||||
try:
|
||||
pos = emu_to_pt(int(po.text))
|
||||
except (TypeError, ValueError):
|
||||
continue
|
||||
return pos/page_width
|
||||
|
||||
for sp in XPath('./wp:simplePos')(anchor):
|
||||
try:
|
||||
x = emu_to_pt(sp.get('x', None))
|
||||
except (TypeError, ValueError):
|
||||
continue
|
||||
return x/page_width
|
||||
|
||||
return 0
|
||||
|
||||
|
||||
class Images(object):
|
||||
|
||||
def __init__(self):
|
||||
self.rid_map = {}
|
||||
self.used = {}
|
||||
self.names = set()
|
||||
self.all_images = set()
|
||||
|
||||
def __call__(self, relationships_by_id):
|
||||
self.rid_map = relationships_by_id
|
||||
|
||||
def generate_filename(self, rid, base=None):
|
||||
if rid in self.used:
|
||||
return self.used[rid]
|
||||
raw = self.docx.read(self.rid_map[rid])
|
||||
base = base or ascii_filename(self.rid_map[rid].rpartition('/')[-1]).replace(' ', '_')
|
||||
ext = what(None, raw) or base.rpartition('.')[-1] or 'jpeg'
|
||||
base = base.rpartition('.')[0] + '.' + ext
|
||||
exists = frozenset(self.used.itervalues())
|
||||
c = 1
|
||||
while base in exists:
|
||||
n, e = base.rpartition('.')[0::2]
|
||||
base = '%s-%d.%s' % (n, c, e)
|
||||
c += 1
|
||||
self.used[rid] = base
|
||||
with open(os.path.join(self.dest_dir, base), 'wb') as f:
|
||||
f.write(raw)
|
||||
self.all_images.add('images/' + base)
|
||||
return base
|
||||
|
||||
def pic_to_img(self, pic, alt=None):
|
||||
name = None
|
||||
for pr in XPath('descendant::pic:cNvPr')(pic):
|
||||
name = pr.get('name', None)
|
||||
if name:
|
||||
name = ascii_filename(name).replace(' ', '_')
|
||||
alt = pr.get('descr', None)
|
||||
for a in XPath('descendant::a:blip[@r:embed]')(pic):
|
||||
rid = get(a, 'r:embed')
|
||||
if rid in self.rid_map:
|
||||
src = self.generate_filename(rid, name)
|
||||
img = IMG(src='images/%s' % src)
|
||||
if alt:
|
||||
img(alt=alt)
|
||||
return img
|
||||
|
||||
def drawing_to_html(self, drawing, page):
|
||||
# First process the inline pictures
|
||||
for inline in XPath('./wp:inline')(drawing):
|
||||
style, alt = get_image_properties(inline)
|
||||
for pic in XPath('descendant::pic:pic')(inline):
|
||||
ans = self.pic_to_img(pic, alt)
|
||||
if ans is not None:
|
||||
if style:
|
||||
ans.set('style', '; '.join('%s: %s' % (k, v) for k, v in style.iteritems()))
|
||||
yield ans
|
||||
|
||||
# Now process the floats
|
||||
for anchor in XPath('./wp:anchor')(drawing):
|
||||
style, alt = get_image_properties(anchor)
|
||||
self.get_float_properties(anchor, style, page)
|
||||
for pic in XPath('descendant::pic:pic')(anchor):
|
||||
ans = self.pic_to_img(pic, alt)
|
||||
if ans is not None:
|
||||
if style:
|
||||
ans.set('style', '; '.join('%s: %s' % (k, v) for k, v in style.iteritems()))
|
||||
yield ans
|
||||
|
||||
def get_float_properties(self, anchor, style, page):
|
||||
if 'display' not in style:
|
||||
style['display'] = 'block'
|
||||
padding = get_image_margins(anchor)
|
||||
width = float(style.get('width', '100pt')[:-2])
|
||||
|
||||
page_width = page.width - page.margin_left - page.margin_right
|
||||
|
||||
hpos = get_hpos(anchor, page_width) + width/(2*page_width)
|
||||
|
||||
wrap_elem = None
|
||||
dofloat = False
|
||||
|
||||
for child in reversed(anchor):
|
||||
bt = barename(child.tag)
|
||||
if bt in {'wrapNone', 'wrapSquare', 'wrapThrough', 'wrapTight', 'wrapTopAndBottom'}:
|
||||
wrap_elem = child
|
||||
dofloat = bt not in {'wrapNone', 'wrapTopAndBottom'}
|
||||
break
|
||||
|
||||
if wrap_elem is not None:
|
||||
padding.update(get_image_margins(wrap_elem))
|
||||
wt = wrap_elem.get('wrapText', None)
|
||||
hpos = 0 if wt == 'right' else 1 if wt == 'left' else hpos
|
||||
if dofloat:
|
||||
style['float'] = 'left' if hpos < 0.65 else 'right'
|
||||
else:
|
||||
ml, mr = (None, None) if hpos < 0.34 else ('auto', None) if hpos > 0.65 else ('auto', 'auto')
|
||||
if ml is not None:
|
||||
style['margin-left'] = ml
|
||||
if mr is not None:
|
||||
style['margin-right'] = mr
|
||||
|
||||
style.update(padding)
|
||||
|
||||
def to_html(self, elem, page, docx, dest_dir):
|
||||
dest = os.path.join(dest_dir, 'images')
|
||||
if not os.path.exists(dest):
|
||||
os.mkdir(dest)
|
||||
self.dest_dir, self.docx = dest, docx
|
||||
if elem.tag.endswith('}drawing'):
|
||||
for tag in self.drawing_to_html(elem, page):
|
||||
yield tag
|
||||
# TODO: Handle w:pict
|
||||
|
||||
|
@ -6,14 +6,23 @@ from __future__ import (unicode_literals, division, absolute_import,
|
||||
__license__ = 'GPL v3'
|
||||
__copyright__ = '2013, Kovid Goyal <kovid at kovidgoyal.net>'
|
||||
|
||||
import re
|
||||
from future_builtins import map
|
||||
|
||||
from lxml.etree import XPath as X
|
||||
|
||||
from calibre.utils.filenames import ascii_text
|
||||
|
||||
DOCUMENT = 'http://schemas.openxmlformats.org/officeDocument/2006/relationships/officeDocument'
|
||||
DOCPROPS = 'http://schemas.openxmlformats.org/package/2006/relationships/metadata/core-properties'
|
||||
APPPROPS = 'http://schemas.openxmlformats.org/officeDocument/2006/relationships/extended-properties'
|
||||
STYLES = 'http://schemas.openxmlformats.org/officeDocument/2006/relationships/styles'
|
||||
NUMBERING = 'http://schemas.openxmlformats.org/officeDocument/2006/relationships/numbering'
|
||||
FONTS = 'http://schemas.openxmlformats.org/officeDocument/2006/relationships/fontTable'
|
||||
IMAGES = 'http://schemas.openxmlformats.org/officeDocument/2006/relationships/image'
|
||||
LINKS = 'http://schemas.openxmlformats.org/officeDocument/2006/relationships/hyperlink'
|
||||
FOOTNOTES = 'http://schemas.openxmlformats.org/officeDocument/2006/relationships/footnotes'
|
||||
ENDNOTES = 'http://schemas.openxmlformats.org/officeDocument/2006/relationships/endnotes'
|
||||
|
||||
namespaces = {
|
||||
'mo': 'http://schemas.microsoft.com/office/mac/office/2008/main',
|
||||
@ -65,7 +74,32 @@ def barename(x):
|
||||
def XML(x):
|
||||
return '{%s}%s' % (namespaces['xml'], x)
|
||||
|
||||
def get(x, attr, default=None):
|
||||
ns, name = attr.partition(':')[0::2]
|
||||
return x.attrib.get('{%s}%s' % (namespaces[ns], name), default)
|
||||
def expand(name):
|
||||
ns, tag = name.partition(':')[0::2]
|
||||
if ns:
|
||||
tag = '{%s}%s' % (namespaces[ns], tag)
|
||||
return tag
|
||||
|
||||
def get(x, attr, default=None):
|
||||
return x.attrib.get(expand(attr), default)
|
||||
|
||||
def ancestor(elem, name):
|
||||
tag = expand(name)
|
||||
while elem is not None:
|
||||
elem = elem.getparent()
|
||||
if getattr(elem, 'tag', None) == tag:
|
||||
return elem
|
||||
|
||||
def generate_anchor(name, existing):
|
||||
x = y = 'id_' + re.sub(r'[^0-9a-zA-Z_]', '', ascii_text(name)).lstrip('_')
|
||||
c = 1
|
||||
while y in existing:
|
||||
y = '%s_%d' % (x, c)
|
||||
c += 1
|
||||
return y
|
||||
|
||||
def children(elem, *args):
|
||||
return elem.iterchildren(*map(expand, args))
|
||||
|
||||
def descendants(elem, *args):
|
||||
return elem.iterdescendants(*map(expand, args))
|
||||
|
@ -13,6 +13,38 @@ from calibre.ebooks.docx.block_styles import ParagraphStyle, inherit
|
||||
from calibre.ebooks.docx.char_styles import RunStyle
|
||||
from calibre.ebooks.docx.names import XPath, get
|
||||
|
||||
class PageProperties(object):
|
||||
|
||||
'''
|
||||
Class representing page level properties (page size/margins) read from
|
||||
sectPr elements.
|
||||
'''
|
||||
|
||||
def __init__(self, elems=()):
|
||||
self.width = self.height = 595.28, 841.89 # pts, A4
|
||||
self.margin_left = self.margin_right = 72 # pts
|
||||
for sectPr in elems:
|
||||
for pgSz in XPath('./w:pgSz')(sectPr):
|
||||
w, h = get(pgSz, 'w:w'), get(pgSz, 'w:h')
|
||||
try:
|
||||
self.width = int(w)/20
|
||||
except (ValueError, TypeError):
|
||||
pass
|
||||
try:
|
||||
self.height = int(h)/20
|
||||
except (ValueError, TypeError):
|
||||
pass
|
||||
for pgMar in XPath('./w:pgMar')(sectPr):
|
||||
l, r = get(pgMar, 'w:left'), get(pgMar, 'w:right')
|
||||
try:
|
||||
self.margin_left = int(l)/20
|
||||
except (ValueError, TypeError):
|
||||
pass
|
||||
try:
|
||||
self.margin_right = int(r)/20
|
||||
except (ValueError, TypeError):
|
||||
pass
|
||||
|
||||
|
||||
class Style(object):
|
||||
'''
|
||||
@ -352,6 +384,16 @@ class Styles(object):
|
||||
p { text-indent: 1.5em }
|
||||
|
||||
ul, ol, p { margin: 0; padding: 0 }
|
||||
|
||||
sup.noteref a { text-decoration: none }
|
||||
|
||||
h1.notes-header { page-break-before: always }
|
||||
|
||||
dl.notes dt { font-size: large }
|
||||
|
||||
dl.notes dt a { text-decoration: none }
|
||||
|
||||
dl.notes dd { page-break-after: always }
|
||||
''') % (self.body_font_family, self.body_font_size)
|
||||
if ef:
|
||||
prefix = ef + '\n' + prefix
|
||||
|
@ -7,17 +7,22 @@ __license__ = 'GPL v3'
|
||||
__copyright__ = '2013, Kovid Goyal <kovid at kovidgoyal.net>'
|
||||
|
||||
import sys, os, re
|
||||
from collections import OrderedDict
|
||||
from collections import OrderedDict, defaultdict
|
||||
|
||||
from lxml import html
|
||||
from lxml.html.builder import (
|
||||
HTML, HEAD, TITLE, BODY, LINK, META, P, SPAN, BR)
|
||||
HTML, HEAD, TITLE, BODY, LINK, META, P, SPAN, BR, DIV, SUP, A, DT, DL, DD, H1)
|
||||
|
||||
from calibre.ebooks.docx.container import DOCX, fromstring
|
||||
from calibre.ebooks.docx.names import XPath, is_tag, XML, STYLES, NUMBERING, FONTS
|
||||
from calibre.ebooks.docx.styles import Styles, inherit
|
||||
from calibre.ebooks.docx.names import (
|
||||
XPath, is_tag, XML, STYLES, NUMBERING, FONTS, get, generate_anchor,
|
||||
descendants, ancestor, FOOTNOTES, ENDNOTES)
|
||||
from calibre.ebooks.docx.styles import Styles, inherit, PageProperties
|
||||
from calibre.ebooks.docx.numbering import Numbering
|
||||
from calibre.ebooks.docx.fonts import Fonts
|
||||
from calibre.ebooks.docx.images import Images
|
||||
from calibre.ebooks.docx.footnotes import Footnotes
|
||||
from calibre.ebooks.metadata.opf2 import OPFCreator
|
||||
from calibre.utils.localization import canonicalize_lang, lang_as_iso639_1
|
||||
|
||||
class Text:
|
||||
@ -31,13 +36,15 @@ class Text:
|
||||
|
||||
class Convert(object):
|
||||
|
||||
def __init__(self, path_or_stream, dest_dir=None, log=None):
|
||||
def __init__(self, path_or_stream, dest_dir=None, log=None, notes_text=None):
|
||||
self.docx = DOCX(path_or_stream, log=log)
|
||||
self.log = self.docx.log
|
||||
self.notes_text = notes_text or _('Notes')
|
||||
self.dest_dir = dest_dir or os.getcwdu()
|
||||
self.mi = self.docx.metadata
|
||||
self.body = BODY()
|
||||
self.styles = Styles()
|
||||
self.images = Images()
|
||||
self.object_map = OrderedDict()
|
||||
self.html = HTML(
|
||||
HEAD(
|
||||
@ -64,12 +71,37 @@ class Convert(object):
|
||||
doc = self.docx.document
|
||||
relationships_by_id, relationships_by_type = self.docx.document_relationships
|
||||
self.read_styles(relationships_by_type)
|
||||
self.images(relationships_by_id)
|
||||
self.layers = OrderedDict()
|
||||
for wp in XPath('//w:p')(doc):
|
||||
self.framed = [[]]
|
||||
self.framed_map = {}
|
||||
self.anchor_map = {}
|
||||
self.link_map = defaultdict(list)
|
||||
|
||||
self.read_page_properties(doc)
|
||||
for wp, page_properties in self.page_map.iteritems():
|
||||
self.current_page = page_properties
|
||||
p = self.convert_p(wp)
|
||||
self.body.append(p)
|
||||
|
||||
notes_header = None
|
||||
if self.footnotes.has_notes:
|
||||
dl = DL()
|
||||
dl.set('class', 'notes')
|
||||
self.body.append(H1(self.notes_text))
|
||||
notes_header = self.body[-1]
|
||||
notes_header.set('class', 'notes-header')
|
||||
self.body.append(dl)
|
||||
for anchor, text, note in self.footnotes:
|
||||
dl.append(DT('[', A('←' + text, href='#back_%s' % anchor, title=text), id=anchor))
|
||||
dl[-1][0].tail = ']'
|
||||
dl.append(DD())
|
||||
for wp in note:
|
||||
p = self.convert_p(wp)
|
||||
dl[-1].append(p)
|
||||
|
||||
self.resolve_links(relationships_by_id)
|
||||
# TODO: tables <w:tbl> child of <w:body> (nested tables?)
|
||||
# TODO: Last section properties <w:sectPr> child of <w:body>
|
||||
|
||||
self.styles.cascade(self.layers)
|
||||
|
||||
@ -84,6 +116,7 @@ class Convert(object):
|
||||
lvl = 0
|
||||
numbered.append((html_obj, num_id, lvl))
|
||||
self.numbering.apply_markup(numbered, self.body, self.styles, self.object_map)
|
||||
self.apply_frames()
|
||||
|
||||
if len(self.body) > 0:
|
||||
self.body.text = '\n\t'
|
||||
@ -100,7 +133,39 @@ class Convert(object):
|
||||
cls = self.styles.class_name(css)
|
||||
if cls:
|
||||
html_obj.set('class', cls)
|
||||
self.write()
|
||||
for html_obj, css in self.framed_map.iteritems():
|
||||
cls = self.styles.class_name(css)
|
||||
if cls:
|
||||
html_obj.set('class', cls)
|
||||
|
||||
if notes_header is not None:
|
||||
for h in self.body.iterchildren('h1', 'h2', 'h3'):
|
||||
notes_header.tag = h.tag
|
||||
cls = h.get('class', None)
|
||||
if cls and cls != 'notes-header':
|
||||
notes_header.set('class', '%s notes-header' % cls)
|
||||
break
|
||||
|
||||
return self.write()
|
||||
|
||||
def read_page_properties(self, doc):
|
||||
current = []
|
||||
self.page_map = OrderedDict()
|
||||
|
||||
for p in descendants(doc, 'w:p'):
|
||||
sect = tuple(descendants(p, 'w:sectPr'))
|
||||
if sect:
|
||||
pr = PageProperties(sect)
|
||||
for x in current + [p]:
|
||||
self.page_map[x] = pr
|
||||
current = []
|
||||
else:
|
||||
current.append(p)
|
||||
if current:
|
||||
last = XPath('./w:body/w:sectPr')(doc)
|
||||
pr = PageProperties(last)
|
||||
for x in current:
|
||||
self.page_map[x] = pr
|
||||
|
||||
def read_styles(self, relationships_by_type):
|
||||
|
||||
@ -109,16 +174,32 @@ class Convert(object):
|
||||
if name is None:
|
||||
cname = self.docx.document_name.split('/')
|
||||
cname[-1] = defname
|
||||
if self.docx.exists(cname):
|
||||
if self.docx.exists('/'.join(cname)):
|
||||
name = name
|
||||
return name
|
||||
|
||||
nname = get_name(NUMBERING, 'numbering.xml')
|
||||
sname = get_name(STYLES, 'styles.xml')
|
||||
fname = get_name(FONTS, 'fontTable.xml')
|
||||
foname = get_name(FOOTNOTES, 'footnotes.xml')
|
||||
enname = get_name(ENDNOTES, 'endnotes.xml')
|
||||
numbering = self.numbering = Numbering()
|
||||
footnotes = self.footnotes = Footnotes()
|
||||
fonts = self.fonts = Fonts()
|
||||
|
||||
foraw = enraw = None
|
||||
if foname is not None:
|
||||
try:
|
||||
foraw = self.docx.read(foname)
|
||||
except KeyError:
|
||||
self.log.warn('Footnotes %s do not exist' % foname)
|
||||
if enname is not None:
|
||||
try:
|
||||
enraw = self.docx.read(enname)
|
||||
except KeyError:
|
||||
self.log.warn('Endnotes %s do not exist' % enname)
|
||||
footnotes(fromstring(foraw) if foraw else None, fromstring(enraw) if enraw else None)
|
||||
|
||||
if fname is not None:
|
||||
embed_relationships = self.docx.get_relationships(fname)[0]
|
||||
try:
|
||||
@ -155,15 +236,43 @@ class Convert(object):
|
||||
with open(os.path.join(self.dest_dir, 'docx.css'), 'wb') as f:
|
||||
f.write(css.encode('utf-8'))
|
||||
|
||||
opf = OPFCreator(self.dest_dir, self.mi)
|
||||
opf.create_manifest_from_files_in([self.dest_dir])
|
||||
opf.create_spine(['index.html'])
|
||||
with open(os.path.join(self.dest_dir, 'metadata.opf'), 'wb') as of, open(os.path.join(self.dest_dir, 'toc.ncx'), 'wb') as ncx:
|
||||
opf.render(of, ncx, 'toc.ncx')
|
||||
return os.path.join(self.dest_dir, 'metadata.opf')
|
||||
|
||||
def convert_p(self, p):
|
||||
dest = P()
|
||||
self.object_map[dest] = p
|
||||
style = self.styles.resolve_paragraph(p)
|
||||
self.layers[p] = []
|
||||
for run in XPath('descendant::w:r')(p):
|
||||
span = self.convert_run(run)
|
||||
dest.append(span)
|
||||
self.layers[p].append(run)
|
||||
self.add_frame(dest, style.frame)
|
||||
|
||||
current_anchor = None
|
||||
current_hyperlink = None
|
||||
|
||||
for x in descendants(p, 'w:r', 'w:bookmarkStart', 'w:hyperlink'):
|
||||
if x.tag.endswith('}r'):
|
||||
span = self.convert_run(x)
|
||||
if current_anchor is not None:
|
||||
(dest if len(dest) == 0 else span).set('id', current_anchor)
|
||||
current_anchor = None
|
||||
if current_hyperlink is not None:
|
||||
hl = ancestor(x, 'w:hyperlink')
|
||||
if hl is not None:
|
||||
self.link_map[hl].append(span)
|
||||
else:
|
||||
current_hyperlink = None
|
||||
dest.append(span)
|
||||
self.layers[p].append(x)
|
||||
elif x.tag.endswith('}bookmarkStart'):
|
||||
anchor = get(x, 'w:name')
|
||||
if anchor and anchor not in self.anchor_map:
|
||||
self.anchor_map[anchor] = current_anchor = generate_anchor(anchor, frozenset(self.anchor_map.itervalues()))
|
||||
elif x.tag.endswith('}hyperlink'):
|
||||
current_hyperlink = x
|
||||
|
||||
m = re.match(r'heading\s+(\d+)$', style.style_name or '', re.IGNORECASE)
|
||||
if m is not None:
|
||||
@ -208,6 +317,31 @@ class Convert(object):
|
||||
for elem in elems:
|
||||
p.remove(elem)
|
||||
wrapper.append(elem)
|
||||
return wrapper
|
||||
|
||||
def resolve_links(self, relationships_by_id):
|
||||
for hyperlink, spans in self.link_map.iteritems():
|
||||
span = spans[0]
|
||||
if len(spans) > 1:
|
||||
span = self.wrap_elems(spans, SPAN())
|
||||
span.tag = 'a'
|
||||
tgt = get(hyperlink, 'w:tgtFrame')
|
||||
if tgt:
|
||||
span.set('target', tgt)
|
||||
tt = get(hyperlink, 'w:tooltip')
|
||||
if tt:
|
||||
span.set('title', tt)
|
||||
rid = get(hyperlink, 'r:id')
|
||||
if rid and rid in relationships_by_id:
|
||||
span.set('href', relationships_by_id[rid])
|
||||
continue
|
||||
anchor = get(hyperlink, 'w:anchor')
|
||||
if anchor and anchor in self.anchor_map:
|
||||
span.set('href', '#' + self.anchor_map[anchor])
|
||||
continue
|
||||
self.log.warn('Hyperlink with unknown target (%s, %s), ignoring' %
|
||||
(rid, anchor))
|
||||
span.set('href', '#')
|
||||
|
||||
def convert_run(self, run):
|
||||
ans = SPAN()
|
||||
@ -239,6 +373,17 @@ class Convert(object):
|
||||
br = BR()
|
||||
text.add_elem(br)
|
||||
ans.append(text.elem)
|
||||
elif is_tag(child, 'w:drawing') or is_tag(child, 'w:pict'):
|
||||
for img in self.images.to_html(child, self.current_page, self.docx, self.dest_dir):
|
||||
text.add_elem(img)
|
||||
ans.append(text.elem)
|
||||
elif is_tag(child, 'w:footnoteReference') or is_tag(child, 'w:endnoteReference'):
|
||||
anchor, name = self.footnotes.get_ref(child)
|
||||
if anchor and name:
|
||||
l = SUP(A(name, href='#' + anchor, title=name), id='back_%s' % anchor)
|
||||
l.set('class', 'noteref')
|
||||
text.add_elem(l)
|
||||
ans.append(text.elem)
|
||||
if text.buf:
|
||||
setattr(text.elem, text.attr, ''.join(text.buf))
|
||||
|
||||
@ -249,7 +394,39 @@ class Convert(object):
|
||||
ans.lang = style.lang
|
||||
return ans
|
||||
|
||||
def add_frame(self, html_obj, style):
|
||||
last_run = self.framed[-1]
|
||||
if style is inherit:
|
||||
if last_run:
|
||||
self.framed.append([])
|
||||
return
|
||||
|
||||
if last_run:
|
||||
if last_run[-1][1] == style:
|
||||
last_run.append((html_obj, style))
|
||||
else:
|
||||
self.framed.append((html_obj, style))
|
||||
else:
|
||||
last_run.append((html_obj, style))
|
||||
|
||||
def apply_frames(self):
|
||||
for run in filter(None, self.framed):
|
||||
style = run[0][1]
|
||||
paras = tuple(x[0] for x in run)
|
||||
parent = paras[0].getparent()
|
||||
idx = parent.index(paras[0])
|
||||
frame = DIV(*paras)
|
||||
parent.insert(idx, frame)
|
||||
self.framed_map[frame] = css = style.css(self.page_map[self.object_map[paras[0]]])
|
||||
self.styles.register(css, 'frame')
|
||||
|
||||
if __name__ == '__main__':
|
||||
import shutil
|
||||
from calibre.utils.logging import default_log
|
||||
default_log.filter_level = default_log.DEBUG
|
||||
Convert(sys.argv[-1], log=default_log)()
|
||||
dest_dir = os.path.join(os.getcwdu(), 'docx_input')
|
||||
if os.path.exists(dest_dir):
|
||||
shutil.rmtree(dest_dir)
|
||||
os.mkdir(dest_dir)
|
||||
Convert(sys.argv[-1], dest_dir=dest_dir, log=default_log)()
|
||||
|
||||
|
@ -179,7 +179,7 @@ class Metadata(object):

def deepcopy(self):
''' Do not use this method unless you know what you are doing, if you want to create a simple clone of
this object, use :method:`deepcopy_metadata` instead. '''
this object, use :meth:`deepcopy_metadata` instead. '''
m = Metadata(None)
m.__dict__ = copy.deepcopy(self.__dict__)
object.__setattr__(m, '_data', copy.deepcopy(object.__getattribute__(self, '_data')))
@ -21,7 +21,7 @@ from calibre.ebooks.metadata.book.base import Metadata
from calibre.utils.date import parse_date, isoformat
from calibre.utils.localization import get_lang, canonicalize_lang
from calibre import prints, guess_type
from calibre.utils.cleantext import clean_ascii_chars
from calibre.utils.cleantext import clean_ascii_chars, clean_xml_chars
from calibre.utils.config import tweaks

class Resource(object): # {{{
@ -560,7 +560,9 @@ class OPF(object): # {{{
self.package_version = 0
self.metadata = self.metadata_path(self.root)
if not self.metadata:
raise ValueError('Malformed OPF file: No <metadata> element')
self.metadata = [self.root.makeelement('{http://www.idpf.org/2007/opf}metadata')]
self.root.insert(0, self.metadata[0])
self.metadata[0].tail = '\n'
self.metadata = self.metadata[0]
if unquote_urls:
self.unquote_urls()
@ -1434,7 +1436,10 @@ def metadata_to_opf(mi, as_string=True, default_lang=None):
attrib['name'] = name
if content:
attrib['content'] = content
elem = metadata.makeelement(tag, attrib=attrib)
try:
elem = metadata.makeelement(tag, attrib=attrib)
except ValueError:
elem = metadata.makeelement(tag, attrib={k:clean_xml_chars(v) for k, v in attrib.iteritems()})
elem.tail = '\n'+(' '*8)
if text:
try:
@ -100,7 +100,7 @@ def update_flow_links(mobi8_reader, resource_map, log):
mr = mobi8_reader
flows = []

img_pattern = re.compile(r'''(<[img\s|image\s][^>]*>)''', re.IGNORECASE)
img_pattern = re.compile(r'''(<[img\s|image\s|svg:image\s][^>]*>)''', re.IGNORECASE)
img_index_pattern = re.compile(r'''['"]kindle:embed:([0-9|A-V]+)[^'"]*['"]''', re.IGNORECASE)

tag_pattern = re.compile(r'''(<[^>]*>)''')
@ -128,7 +128,7 @@ def update_flow_links(mobi8_reader, resource_map, log):
srcpieces = img_pattern.split(flow)
for j in range(1, len(srcpieces), 2):
tag = srcpieces[j]
if tag.startswith('<im'):
if tag.startswith('<im') or tag.startswith('<svg:image'):
for m in img_index_pattern.finditer(tag):
num = int(m.group(1), 32)
href = resource_map[num-1]
@ -228,7 +228,7 @@ class Mobi8Reader(object):

self.flowinfo.append(FlowInfo(None, None, None, None))
svg_tag_pattern = re.compile(br'''(<svg[^>]*>)''', re.IGNORECASE)
image_tag_pattern = re.compile(br'''(<image[^>]*>)''', re.IGNORECASE)
image_tag_pattern = re.compile(br'''(<(?:svg:)?image[^>]*>)''', re.IGNORECASE)
for j in xrange(1, len(self.flows)):
flowpart = self.flows[j]
nstr = '%04d' % j
@ -243,7 +243,7 @@ class Mobi8Reader(object):
dir = None
fname = None
# strip off anything before <svg if inlining
flowpart = flowpart[start:]
flowpart = re.sub(br'(</?)svg:', r'\1', flowpart[start:])
else:
format = 'file'
dir = "images"
@ -373,7 +373,7 @@ def urlquote(href):
result.append(char)
return ''.join(result)

def urlunquote(href):
def urlunquote(href, error_handling='strict'):
# unquote must run on a bytestring and will return a bytestring
# If it runs on a unicode object, it returns a double encoded unicode
# string: unquote(u'%C3%A4') != unquote(b'%C3%A4').decode('utf-8')
@ -383,7 +383,10 @@ def urlunquote(href):
href = href.encode('utf-8')
href = unquote(href)
if want_unicode:
href = href.decode('utf-8')
# The quoted characters could have been in some encoding other than
# UTF-8, this often happens with old/broken web servers. There is no
# way to know what that encoding should be in this context.
href = href.decode('utf-8', error_handling)
return href

def urlnormalize(href):
@ -11,7 +11,7 @@ import re

from calibre import guess_type

class EntityDeclarationProcessor(object): # {{{
class EntityDeclarationProcessor(object): # {{{

def __init__(self, html):
self.declared_entities = {}
@ -51,7 +51,7 @@ def load_html(path, view, codec='utf-8', mime_type=None,
loading_url = QUrl.fromLocalFile(path)
pre_load_callback(loading_url)

if force_as_html or re.search(r'<[:a-zA-Z0-9-]*svg', html) is None:
if force_as_html or re.search(r'<[a-zA-Z0-9-]+:svg', html) is None:
view.setHtml(html, loading_url)
else:
view.setContent(QByteArray(html.encode(codec)), mime_type,
@ -61,4 +61,3 @@ def load_html(path, view, codec='utf-8', mime_type=None,
if not elem.isNull():
return False
return True
@ -32,7 +32,8 @@ def dynamic_rescale_factor(node):
|
||||
classes = node.get('class', '').split(' ')
|
||||
classes = [x.replace('calibre_rescale_', '') for x in classes if
|
||||
x.startswith('calibre_rescale_')]
|
||||
if not classes: return None
|
||||
if not classes:
|
||||
return None
|
||||
factor = 1.0
|
||||
for x in classes:
|
||||
try:
|
||||
@ -54,7 +55,8 @@ class KeyMapper(object):
|
||||
return base
|
||||
size = float(size)
|
||||
base = float(base)
|
||||
if abs(size - base) < 0.1: return 0
|
||||
if abs(size - base) < 0.1:
|
||||
return 0
|
||||
sign = -1 if size < base else 1
|
||||
endp = 0 if size < base else 36
|
||||
diff = (abs(base - size) * 3) + ((36 - size) / 100)
|
||||
@ -110,7 +112,8 @@ class EmbedFontsCSSRules(object):
|
||||
self.href = None
|
||||
|
||||
def __call__(self, oeb):
|
||||
if not self.body_font_family: return None
|
||||
if not self.body_font_family:
|
||||
return None
|
||||
if not self.href:
|
||||
iid, href = oeb.manifest.generate(u'page_styles', u'page_styles.css')
|
||||
rules = [x.cssText for x in self.rules]
|
||||
@ -228,10 +231,10 @@ class CSSFlattener(object):
|
||||
bs.append('margin-top: 0pt')
|
||||
bs.append('margin-bottom: 0pt')
|
||||
if float(self.context.margin_left) >= 0:
|
||||
bs.append('margin-left : %gpt'%\
|
||||
bs.append('margin-left : %gpt'%
|
||||
float(self.context.margin_left))
|
||||
if float(self.context.margin_right) >= 0:
|
||||
bs.append('margin-right : %gpt'%\
|
||||
bs.append('margin-right : %gpt'%
|
||||
float(self.context.margin_right))
|
||||
bs.extend(['padding-left: 0pt', 'padding-right: 0pt'])
|
||||
if self.page_break_on_body:
|
||||
@ -277,8 +280,10 @@ class CSSFlattener(object):
|
||||
for kind in ('margin', 'padding'):
|
||||
for edge in ('bottom', 'top'):
|
||||
property = "%s-%s" % (kind, edge)
|
||||
if property not in cssdict: continue
|
||||
if '%' in cssdict[property]: continue
|
||||
if property not in cssdict:
|
||||
continue
|
||||
if '%' in cssdict[property]:
|
||||
continue
|
||||
value = style[property]
|
||||
if value == 0:
|
||||
continue
|
||||
@ -296,7 +301,7 @@ class CSSFlattener(object):
|
||||
def flatten_node(self, node, stylizer, names, styles, pseudo_styles, psize, item_id):
|
||||
if not isinstance(node.tag, basestring) \
|
||||
or namespace(node.tag) != XHTML_NS:
|
||||
return
|
||||
return
|
||||
tag = barename(node.tag)
|
||||
style = stylizer.style(node)
|
||||
cssdict = style.cssdict()
|
||||
@ -360,12 +365,17 @@ class CSSFlattener(object):
|
||||
pass
|
||||
del node.attrib['bgcolor']
|
||||
if cssdict.get('font-weight', '').lower() == 'medium':
|
||||
cssdict['font-weight'] = 'normal' # ADE chokes on font-weight medium
|
||||
cssdict['font-weight'] = 'normal' # ADE chokes on font-weight medium
|
||||
|
||||
fsize = font_size
|
||||
is_drop_cap = (cssdict.get('float', None) == 'left' and 'font-size' in
|
||||
cssdict and len(node) == 0 and node.text and
|
||||
len(node.text) == 1)
|
||||
is_drop_cap = is_drop_cap or (
|
||||
# The docx input plugin generates drop caps that look like this
|
||||
len(node) == 1 and not node.text and len(node[0]) == 0 and
|
||||
node[0].text and not node[0].tail and len(node[0].text) == 1 and
|
||||
'line-height' in cssdict and 'font-size' in cssdict)
|
||||
if not self.context.disable_font_rescaling and not is_drop_cap:
|
||||
_sbase = self.sbase if self.sbase is not None else \
|
||||
self.context.source.fbase
|
||||
@ -436,8 +446,7 @@ class CSSFlattener(object):
|
||||
keep_classes = set()
|
||||
|
||||
if cssdict:
|
||||
items = cssdict.items()
|
||||
items.sort()
|
||||
items = sorted(cssdict.items())
|
||||
css = u';\n'.join(u'%s: %s' % (key, val) for key, val in items)
|
||||
classes = node.get('class', '').strip() or 'calibre'
|
||||
klass = ascii_text(STRIPNUM.sub('', classes.split()[0].replace('_', '')))
|
||||
@ -519,8 +528,7 @@ class CSSFlattener(object):
|
||||
if float(self.context.margin_bottom) >= 0:
|
||||
stylizer.page_rule['margin-bottom'] = '%gpt'%\
|
||||
float(self.context.margin_bottom)
|
||||
items = stylizer.page_rule.items()
|
||||
items.sort()
|
||||
items = sorted(stylizer.page_rule.items())
|
||||
css = ';\n'.join("%s: %s" % (key, val) for key, val in items)
|
||||
css = ('@page {\n%s\n}\n'%css) if items else ''
|
||||
rules = [r.cssText for r in stylizer.font_face_rules +
|
||||
@ -556,14 +564,14 @@ class CSSFlattener(object):
|
||||
body = html.find(XHTML('body'))
|
||||
fsize = self.context.dest.fbase
|
||||
self.flatten_node(body, stylizer, names, styles, pseudo_styles, fsize, item.id)
|
||||
items = [(key, val) for (val, key) in styles.items()]
|
||||
items.sort()
|
||||
items = sorted([(key, val) for (val, key) in styles.items()])
|
||||
# :hover must come after link and :active must come after :hover
|
||||
psels = sorted(pseudo_styles.iterkeys(), key=lambda x :
|
||||
{'hover':1, 'active':2}.get(x, 0))
|
||||
for psel in psels:
|
||||
styles = pseudo_styles[psel]
|
||||
if not styles: continue
|
||||
if not styles:
|
||||
continue
|
||||
x = sorted(((k+':'+psel, v) for v, k in styles.iteritems()))
|
||||
items.extend(x)
|
||||
|
||||
|
@ -113,7 +113,7 @@ class Split(object):
|
||||
for i, elem in enumerate(item.data.iter()):
|
||||
try:
|
||||
elem.set('pb_order', str(i))
|
||||
except TypeError: # Cant set attributes on comment nodes etc.
|
||||
except TypeError: # Cant set attributes on comment nodes etc.
|
||||
continue
|
||||
|
||||
page_breaks = list(page_breaks)
|
||||
@ -159,7 +159,11 @@ class Split(object):
except ValueError:
# Unparseable URL
return url
href = urlnormalize(href)
try:
href = urlnormalize(href)
except ValueError:
# href has non utf-8 quoting
return url
if href in self.map:
anchor_map = self.map[href]
nhref = anchor_map[frag if frag else None]
@ -171,7 +175,6 @@ class Split(object):
|
||||
return url
|
||||
|
||||
|
||||
|
||||
class FlowSplitter(object):
|
||||
'The actual splitting logic'
|
||||
|
||||
@ -313,7 +316,6 @@ class FlowSplitter(object):
|
||||
split_point = root.xpath(path)[0]
|
||||
split_point2 = root2.xpath(path)[0]
|
||||
|
||||
|
||||
def nix_element(elem, top=True):
|
||||
# Remove elem unless top is False in which case replace elem by its
|
||||
# children
|
||||
@ -373,6 +375,8 @@ class FlowSplitter(object):
for img in root.xpath('//h:img', namespaces=NAMESPACES):
if img.get('style', '') != 'display:none':
return False
if root.xpath('//*[local-name() = "svg"]'):
return False
return True

def split_text(self, text, root, size):
@ -393,7 +397,6 @@ class FlowSplitter(object):
|
||||
buf = part
|
||||
return ans
|
||||
|
||||
|
||||
def split_to_size(self, tree):
|
||||
self.log.debug('\t\tSplitting...')
|
||||
root = tree.getroot()
|
||||
@ -440,7 +443,7 @@ class FlowSplitter(object):
|
||||
len(self.split_trees), size/1024.))
|
||||
else:
|
||||
self.log.debug(
|
||||
'\t\t\tSplit tree still too large: %d KB' % \
|
||||
'\t\t\tSplit tree still too large: %d KB' %
|
||||
(size/1024.))
|
||||
self.split_to_size(t)
|
||||
|
||||
@ -546,7 +549,6 @@ class FlowSplitter(object):
|
||||
for x in toc:
|
||||
fix_toc_entry(x)
|
||||
|
||||
|
||||
if self.oeb.toc:
|
||||
fix_toc_entry(self.oeb.toc)
|
||||
|
||||
|
@ -22,7 +22,7 @@ from calibre.gui2 import (gprefs, warning_dialog, Dispatcher, error_dialog,
|
||||
from calibre.library.database2 import LibraryDatabase2
|
||||
from calibre.gui2.actions import InterfaceAction
|
||||
|
||||
class LibraryUsageStats(object): # {{{
|
||||
class LibraryUsageStats(object): # {{{
|
||||
|
||||
def __init__(self):
|
||||
self.stats = {}
|
||||
@ -92,7 +92,7 @@ class LibraryUsageStats(object): # {{{
|
||||
self.write_stats()
|
||||
# }}}
|
||||
|
||||
class MovedDialog(QDialog): # {{{
|
||||
class MovedDialog(QDialog): # {{{
|
||||
|
||||
def __init__(self, stats, location, parent=None):
|
||||
QDialog.__init__(self, parent)
|
||||
@ -161,13 +161,15 @@ class ChooseLibraryAction(InterfaceAction):
|
||||
def genesis(self):
|
||||
self.base_text = _('%d books')
|
||||
self.count_changed(0)
|
||||
self.qaction.triggered.connect(self.choose_library,
|
||||
type=Qt.QueuedConnection)
|
||||
self.action_choose = self.menuless_qaction
|
||||
|
||||
self.stats = LibraryUsageStats()
|
||||
self.popup_type = (QToolButton.InstantPopup if len(self.stats.stats) > 1 else
|
||||
QToolButton.MenuButtonPopup)
|
||||
if len(self.stats.stats) > 1:
|
||||
self.action_choose.triggered.connect(self.choose_library)
|
||||
else:
|
||||
self.qaction.triggered.connect(self.choose_library)
|
||||
|
||||
self.choose_menu = self.qaction.menu()
|
||||
|
||||
@ -200,7 +202,6 @@ class ChooseLibraryAction(InterfaceAction):
|
||||
type=Qt.QueuedConnection)
|
||||
self.choose_menu.addAction(ac)
|
||||
|
||||
|
||||
self.rename_separator = self.choose_menu.addSeparator()
|
||||
|
||||
self.maintenance_menu = QMenu(_('Library Maintenance'))
|
||||
@ -477,19 +478,20 @@ class ChooseLibraryAction(InterfaceAction):
|
||||
else:
|
||||
return
|
||||
|
||||
#from calibre.utils.mem import memory
|
||||
#import weakref
|
||||
#from PyQt4.Qt import QTimer
|
||||
#self.dbref = weakref.ref(self.gui.library_view.model().db)
|
||||
#self.before_mem = memory()/1024**2
|
||||
# from calibre.utils.mem import memory
|
||||
# import weakref
|
||||
# from PyQt4.Qt import QTimer
|
||||
# self.dbref = weakref.ref(self.gui.library_view.model().db)
|
||||
# self.before_mem = memory()/1024**2
|
||||
self.gui.library_moved(loc, allow_rebuild=True)
|
||||
#QTimer.singleShot(5000, self.debug_leak)
|
||||
# QTimer.singleShot(5000, self.debug_leak)
|
||||
|
||||
def debug_leak(self):
|
||||
import gc
|
||||
from calibre.utils.mem import memory
|
||||
ref = self.dbref
|
||||
for i in xrange(3): gc.collect()
|
||||
for i in xrange(3):
|
||||
gc.collect()
|
||||
if ref() is not None:
|
||||
print 'DB object alive:', ref()
|
||||
for r in gc.get_referrers(ref())[:10]:
|
||||
@ -500,7 +502,6 @@ class ChooseLibraryAction(InterfaceAction):
|
||||
print
|
||||
self.dbref = self.before_mem = None
|
||||
|
||||
|
||||
def qs_requested(self, idx, *args):
|
||||
self.switch_requested(self.qs_locations[idx])
|
||||
|
||||
@ -546,3 +547,4 @@ class ChooseLibraryAction(InterfaceAction):
|
||||
return False
|
||||
|
||||
return True
|
||||
|
||||
|
@ -7,10 +7,10 @@ __docformat__ = 'restructuredtext en'
|
||||
|
||||
from functools import partial
|
||||
|
||||
from PyQt4.Qt import QComboBox, QLabel, QSpinBox, QDoubleSpinBox, QDateTimeEdit, \
|
||||
QDateTime, QGroupBox, QVBoxLayout, QSizePolicy, QGridLayout, \
|
||||
QSpacerItem, QIcon, QCheckBox, QWidget, QHBoxLayout, SIGNAL, \
|
||||
QPushButton, QMessageBox, QToolButton
|
||||
from PyQt4.Qt import (QComboBox, QLabel, QSpinBox, QDoubleSpinBox, QDateTimeEdit,
|
||||
QDateTime, QGroupBox, QVBoxLayout, QSizePolicy, QGridLayout,
|
||||
QSpacerItem, QIcon, QCheckBox, QWidget, QHBoxLayout, SIGNAL,
|
||||
QPushButton, QMessageBox, QToolButton, Qt)
|
||||
|
||||
from calibre.utils.date import qt_to_dt, now
|
||||
from calibre.gui2.complete2 import EditWithComplete
|
||||
@ -39,7 +39,6 @@ class Base(object):
|
||||
def gui_val(self):
|
||||
return self.getter()
|
||||
|
||||
|
||||
def commit(self, book_id, notify=False):
|
||||
val = self.gui_val
|
||||
val = self.normalize_ui_val(val)
|
||||
@ -159,6 +158,17 @@ class DateTimeEdit(QDateTimeEdit):
def set_to_clear(self):
self.setDateTime(UNDEFINED_QDATETIME)

def keyPressEvent(self, ev):
if ev.key() == Qt.Key_Minus:
ev.accept()
self.setDateTime(self.minimumDateTime())
elif ev.key() == Qt.Key_Equal:
ev.accept()
self.setDateTime(QDateTime.currentDateTime())
else:
return QDateTimeEdit.keyPressEvent(self, ev)


class DateTime(Base):

def setup_ui(self, parent):
@ -211,7 +221,7 @@ class Comments(Base):
|
||||
self._layout = QVBoxLayout()
|
||||
self._tb = CommentsEditor(self._box)
|
||||
self._tb.setSizePolicy(QSizePolicy.Expanding, QSizePolicy.Minimum)
|
||||
#self._tb.setTabChangesFocus(True)
|
||||
# self._tb.setTabChangesFocus(True)
|
||||
self._layout.addWidget(self._tb)
|
||||
self._box.setLayout(self._layout)
|
||||
self.widgets = [self._box]
|
||||
@ -534,7 +544,7 @@ def populate_metadata_page(layout, db, book_id, bulk=False, two_column=False, pa
|
||||
column = row = base_row = max_row = 0
|
||||
for key in cols:
|
||||
if not fm[key]['is_editable']:
|
||||
continue # this almost never happens
|
||||
continue # this almost never happens
|
||||
dt = fm[key]['datatype']
|
||||
if dt == 'composite' or (bulk and dt == 'comments'):
|
||||
continue
|
||||
@ -595,7 +605,6 @@ class BulkBase(Base):
|
||||
self._cached_gui_val_ = self.getter()
|
||||
return self._cached_gui_val_
|
||||
|
||||
|
||||
def get_initial_value(self, book_ids):
|
||||
values = set([])
|
||||
for book_id in book_ids:
|
||||
@ -633,7 +642,7 @@ class BulkBase(Base):
|
||||
self.main_widget = main_widget_class(w)
|
||||
l.addWidget(self.main_widget)
|
||||
l.setStretchFactor(self.main_widget, 10)
|
||||
self.a_c_checkbox = QCheckBox( _('Apply changes'), w)
|
||||
self.a_c_checkbox = QCheckBox(_('Apply changes'), w)
|
||||
l.addWidget(self.a_c_checkbox)
|
||||
self.ignore_change_signals = True
|
||||
|
||||
@ -1054,3 +1063,5 @@ bulk_widgets = {
|
||||
'series': BulkSeries,
|
||||
'enumeration': BulkEnumeration,
|
||||
}
|
||||
|
||||
|
||||
|
@ -27,7 +27,7 @@ def partial(*args, **kwargs):
|
||||
_keep_refs.append(ans)
|
||||
return ans
|
||||
|
||||
class LibraryViewMixin(object): # {{{
|
||||
class LibraryViewMixin(object): # {{{
|
||||
|
||||
def __init__(self, db):
|
||||
self.library_view.files_dropped.connect(self.iactions['Add Books'].files_dropped, type=Qt.QueuedConnection)
|
||||
@ -100,7 +100,7 @@ class LibraryViewMixin(object): # {{{
|
||||
|
||||
# }}}
|
||||
|
||||
class LibraryWidget(Splitter): # {{{
|
||||
class LibraryWidget(Splitter): # {{{
|
||||
|
||||
def __init__(self, parent):
|
||||
orientation = Qt.Vertical
|
||||
@ -119,7 +119,7 @@ class LibraryWidget(Splitter): # {{{
|
||||
self.addWidget(parent.library_view)
|
||||
# }}}
|
||||
|
||||
class Stack(QStackedWidget): # {{{
|
||||
class Stack(QStackedWidget): # {{{
|
||||
|
||||
def __init__(self, parent):
|
||||
QStackedWidget.__init__(self, parent)
|
||||
@ -147,7 +147,7 @@ class Stack(QStackedWidget): # {{{
|
||||
|
||||
# }}}
|
||||
|
||||
class UpdateLabel(QLabel): # {{{
|
||||
class UpdateLabel(QLabel): # {{{
|
||||
|
||||
def __init__(self, *args, **kwargs):
|
||||
QLabel.__init__(self, *args, **kwargs)
|
||||
@ -157,22 +157,22 @@ class UpdateLabel(QLabel): # {{{
|
||||
pass
|
||||
# }}}
|
||||
|
||||
class StatusBar(QStatusBar): # {{{
class StatusBar(QStatusBar): # {{{

def __init__(self, parent=None):
QStatusBar.__init__(self, parent)
self.default_message = __appname__ + ' ' + _('version') + ' ' + \
self.get_version() + ' ' + _('created by Kovid Goyal')
self.device_string = ''
self.update_label = UpdateLabel('')
self.total = self.current = self.selected = 0
self.addPermanentWidget(self.update_label)
self.update_label.setVisible(False)
self._font = QFont()
self._font.setBold(True)
self.setFont(self._font)
self.defmsg = QLabel(self.default_message)
self.defmsg = QLabel('')
self.defmsg.setFont(self._font)
self.addWidget(self.defmsg)
self.set_label()

def initialize(self, systray=None):
self.systray = systray
@ -180,17 +180,39 @@ class StatusBar(QStatusBar): # {{{

def device_connected(self, devname):
self.device_string = _('Connected ') + devname
self.defmsg.setText(self.default_message + ' ..::.. ' +
self.device_string)
self.set_label()

def update_state(self, total, current, selected):
self.total, self.current, self.selected = total, current, selected
self.set_label()

def set_label(self):
try:
self._set_label()
except:
import traceback
traceback.print_exc()

def _set_label(self):
msg = '%s %s %s' % (__appname__, _('version'), get_version())
if self.device_string:
msg += ' ..::.. ' + self.device_string
else:
msg += _(' %(created)s %(name)s') % dict(created=_('created by'), name='Kovid Goyal')

if self.total != self.current:
base = _('%(num)d of %(total)d books') % dict(num=self.current, total=self.total)
else:
base = _('%d books') % self.total
if self.selected > 0:
base = _('%(num)s, %(sel)d selected') % dict(num=base, sel=self.selected)

self.defmsg.setText('%s [%s]' % (msg, base))
self.clearMessage()

def device_disconnected(self):
self.device_string = ''
self.defmsg.setText(self.default_message)
self.clearMessage()

def get_version(self):
return get_version()
self.set_label()

def show_message(self, msg, timeout=0):
self.showMessage(msg, timeout)
@ -207,11 +229,11 @@ class StatusBar(QStatusBar): # {{{
|
||||
|
||||
# }}}
|
||||
|
||||
class LayoutMixin(object): # {{{
|
||||
class LayoutMixin(object): # {{{
|
||||
|
||||
def __init__(self):
|
||||
|
||||
if config['gui_layout'] == 'narrow': # narrow {{{
|
||||
if config['gui_layout'] == 'narrow': # narrow {{{
|
||||
self.book_details = BookDetails(False, self)
|
||||
self.stack = Stack(self)
|
||||
self.bd_splitter = Splitter('book_details_splitter',
|
||||
@ -224,7 +246,7 @@ class LayoutMixin(object): # {{{
|
||||
self.centralwidget.layout().addWidget(self.bd_splitter)
|
||||
button_order = ('tb', 'bd', 'cb')
|
||||
# }}}
|
||||
else: # wide {{{
|
||||
else: # wide {{{
|
||||
self.bd_splitter = Splitter('book_details_splitter',
|
||||
_('Book Details'), I('book.png'), initial_side_size=200,
|
||||
orientation=Qt.Horizontal, parent=self, side_index=1,
|
||||
@ -312,9 +334,15 @@ class LayoutMixin(object): # {{{
|
||||
|
||||
def read_layout_settings(self):
|
||||
# View states are restored automatically when set_database is called
|
||||
|
||||
for x in ('cb', 'tb', 'bd'):
|
||||
getattr(self, x+'_splitter').restore_state()
|
||||
|
||||
def update_status_bar(self, *args):
|
||||
v = self.current_view()
|
||||
selected = len(v.selectionModel().selectedRows())
|
||||
total, current = v.model().counts()
|
||||
self.status_bar.update_state(total, current, selected)
|
||||
|
||||
# }}}
|
||||
|
||||
|
||||
|
@ -9,7 +9,7 @@ import sys
|
||||
|
||||
from PyQt4.Qt import (Qt, QApplication, QStyle, QIcon, QDoubleSpinBox,
|
||||
QVariant, QSpinBox, QStyledItemDelegate, QComboBox, QTextDocument,
|
||||
QAbstractTextDocumentLayout, QFont, QFontInfo, QDate)
|
||||
QAbstractTextDocumentLayout, QFont, QFontInfo, QDate, QDateTimeEdit, QDateTime)
|
||||
|
||||
from calibre.gui2 import UNDEFINED_QDATETIME, error_dialog, rating_font
|
||||
from calibre.constants import iswindows
|
||||
@ -23,8 +23,28 @@ from calibre.gui2.dialogs.comments_dialog import CommentsDialog
|
||||
from calibre.gui2.dialogs.template_dialog import TemplateDialog
|
||||
from calibre.gui2.languages import LanguagesEdit
|
||||
|
||||
class DateTimeEdit(QDateTimeEdit): # {{{
|
||||
|
||||
class RatingDelegate(QStyledItemDelegate): # {{{
|
||||
def __init__(self, parent, format):
|
||||
QDateTimeEdit.__init__(self, parent)
|
||||
self.setFrame(False)
|
||||
self.setMinimumDateTime(UNDEFINED_QDATETIME)
|
||||
self.setSpecialValueText(_('Undefined'))
|
||||
self.setCalendarPopup(True)
|
||||
self.setDisplayFormat(format)
|
||||
|
||||
def keyPressEvent(self, ev):
|
||||
if ev.key() == Qt.Key_Minus:
|
||||
ev.accept()
|
||||
self.setDateTime(self.minimumDateTime())
|
||||
elif ev.key() == Qt.Key_Equal:
|
||||
ev.accept()
|
||||
self.setDateTime(QDateTime.currentDateTime())
|
||||
else:
|
||||
return QDateTimeEdit.keyPressEvent(self, ev)
|
||||
# }}}
|
||||
|
||||
class RatingDelegate(QStyledItemDelegate): # {{{
|
||||
|
||||
def __init__(self, *args, **kwargs):
|
||||
QStyledItemDelegate.__init__(self, *args, **kwargs)
|
||||
@ -60,7 +80,7 @@ class RatingDelegate(QStyledItemDelegate): # {{{
|
||||
|
||||
# }}}
|
||||
|
||||
class DateDelegate(QStyledItemDelegate): # {{{
|
||||
class DateDelegate(QStyledItemDelegate): # {{{
|
||||
|
||||
def __init__(self, parent, tweak_name='gui_timestamp_display_format',
|
||||
default_format='dd MMM yyyy'):
|
||||
@ -77,16 +97,11 @@ class DateDelegate(QStyledItemDelegate): # {{{
|
||||
return format_date(qt_to_dt(d, as_utc=False), self.format)
|
||||
|
||||
def createEditor(self, parent, option, index):
|
||||
qde = QStyledItemDelegate.createEditor(self, parent, option, index)
|
||||
qde.setDisplayFormat(self.format)
|
||||
qde.setMinimumDateTime(UNDEFINED_QDATETIME)
|
||||
qde.setSpecialValueText(_('Undefined'))
|
||||
qde.setCalendarPopup(True)
|
||||
return qde
|
||||
return DateTimeEdit(parent, self.format)
|
||||
|
||||
# }}}
|
||||
|
||||
class PubDateDelegate(QStyledItemDelegate): # {{{
|
||||
class PubDateDelegate(QStyledItemDelegate): # {{{
|
||||
|
||||
def __init__(self, *args, **kwargs):
|
||||
QStyledItemDelegate.__init__(self, *args, **kwargs)
|
||||
@ -101,12 +116,7 @@ class PubDateDelegate(QStyledItemDelegate): # {{{
|
||||
return format_date(qt_to_dt(d, as_utc=False), self.format)
|
||||
|
||||
def createEditor(self, parent, option, index):
|
||||
qde = QStyledItemDelegate.createEditor(self, parent, option, index)
|
||||
qde.setDisplayFormat(self.format)
|
||||
qde.setMinimumDateTime(UNDEFINED_QDATETIME)
|
||||
qde.setSpecialValueText(_('Undefined'))
|
||||
qde.setCalendarPopup(True)
|
||||
return qde
|
||||
return DateTimeEdit(parent, self.format)
|
||||
|
||||
def setEditorData(self, editor, index):
|
||||
val = index.data(Qt.EditRole).toDate()
|
||||
@ -116,7 +126,7 @@ class PubDateDelegate(QStyledItemDelegate): # {{{
|
||||
|
||||
# }}}
|
||||
|
||||
class TextDelegate(QStyledItemDelegate): # {{{
|
||||
class TextDelegate(QStyledItemDelegate): # {{{
|
||||
def __init__(self, parent):
|
||||
'''
|
||||
Delegate for text data. If auto_complete_function needs to return a list
|
||||
@ -153,7 +163,7 @@ class TextDelegate(QStyledItemDelegate): # {{{
|
||||
|
||||
#}}}
|
||||
|
||||
class CompleteDelegate(QStyledItemDelegate): # {{{
|
||||
class CompleteDelegate(QStyledItemDelegate): # {{{
|
||||
def __init__(self, parent, sep, items_func_name, space_before_sep=False):
|
||||
QStyledItemDelegate.__init__(self, parent)
|
||||
self.sep = sep
|
||||
@ -194,7 +204,7 @@ class CompleteDelegate(QStyledItemDelegate): # {{{
|
||||
QStyledItemDelegate.setModelData(self, editor, model, index)
|
||||
# }}}
|
||||
|
||||
class LanguagesDelegate(QStyledItemDelegate): # {{{
|
||||
class LanguagesDelegate(QStyledItemDelegate): # {{{
|
||||
|
||||
def createEditor(self, parent, option, index):
|
||||
editor = LanguagesEdit(parent=parent)
|
||||
@ -210,7 +220,7 @@ class LanguagesDelegate(QStyledItemDelegate): # {{{
|
||||
model.setData(index, QVariant(val), Qt.EditRole)
|
||||
# }}}
|
||||
|
||||
class CcDateDelegate(QStyledItemDelegate): # {{{
|
||||
class CcDateDelegate(QStyledItemDelegate): # {{{
|
||||
'''
|
||||
Delegate for custom columns dates. Because this delegate stores the
|
||||
format as an instance variable, a new instance must be created for each
|
||||
@ -230,12 +240,7 @@ class CcDateDelegate(QStyledItemDelegate): # {{{
|
||||
return format_date(qt_to_dt(d, as_utc=False), self.format)
|
||||
|
||||
def createEditor(self, parent, option, index):
|
||||
qde = QStyledItemDelegate.createEditor(self, parent, option, index)
|
||||
qde.setDisplayFormat(self.format)
|
||||
qde.setMinimumDateTime(UNDEFINED_QDATETIME)
|
||||
qde.setSpecialValueText(_('Undefined'))
|
||||
qde.setCalendarPopup(True)
|
||||
return qde
|
||||
return DateTimeEdit(parent, self.format)
|
||||
|
||||
def setEditorData(self, editor, index):
|
||||
m = index.model()
|
||||
@ -254,7 +259,7 @@ class CcDateDelegate(QStyledItemDelegate): # {{{
|
||||
|
||||
# }}}
|
||||
|
||||
class CcTextDelegate(QStyledItemDelegate): # {{{
|
||||
class CcTextDelegate(QStyledItemDelegate): # {{{
|
||||
'''
|
||||
Delegate for text data.
|
||||
'''
|
||||
@ -279,7 +284,7 @@ class CcTextDelegate(QStyledItemDelegate): # {{{
|
||||
model.setData(index, QVariant(val), Qt.EditRole)
|
||||
# }}}
|
||||
|
||||
class CcNumberDelegate(QStyledItemDelegate): # {{{
|
||||
class CcNumberDelegate(QStyledItemDelegate): # {{{
|
||||
'''
|
||||
Delegate for text/int/float data.
|
||||
'''
|
||||
@ -314,7 +319,7 @@ class CcNumberDelegate(QStyledItemDelegate): # {{{
|
||||
|
||||
# }}}
|
||||
|
||||
class CcEnumDelegate(QStyledItemDelegate): # {{{
|
||||
class CcEnumDelegate(QStyledItemDelegate): # {{{
|
||||
'''
|
||||
Delegate for text/int/float data.
|
||||
'''
|
||||
@ -346,7 +351,7 @@ class CcEnumDelegate(QStyledItemDelegate): # {{{
|
||||
editor.setCurrentIndex(idx)
|
||||
# }}}
|
||||
|
||||
class CcCommentsDelegate(QStyledItemDelegate): # {{{
|
||||
class CcCommentsDelegate(QStyledItemDelegate): # {{{
|
||||
'''
|
||||
Delegate for comments data.
|
||||
'''
|
||||
@ -364,7 +369,7 @@ class CcCommentsDelegate(QStyledItemDelegate): # {{{
|
||||
if hasattr(QStyle, 'CE_ItemViewItem'):
|
||||
style.drawControl(QStyle.CE_ItemViewItem, option, painter)
|
||||
ctx = QAbstractTextDocumentLayout.PaintContext()
|
||||
ctx.palette = option.palette #.setColor(QPalette.Text, QColor("red"));
|
||||
ctx.palette = option.palette # .setColor(QPalette.Text, QColor("red"));
|
||||
if hasattr(QStyle, 'SE_ItemViewItemText'):
|
||||
textRect = style.subElementRect(QStyle.SE_ItemViewItemText, option)
|
||||
painter.save()
|
||||
@ -387,7 +392,7 @@ class CcCommentsDelegate(QStyledItemDelegate): # {{{
|
||||
model.setData(index, QVariant(editor.textbox.html), Qt.EditRole)
|
||||
# }}}
|
||||
|
||||
class DelegateCB(QComboBox): # {{{
|
||||
class DelegateCB(QComboBox): # {{{
|
||||
|
||||
def __init__(self, parent):
|
||||
QComboBox.__init__(self, parent)
|
||||
@ -398,7 +403,7 @@ class DelegateCB(QComboBox): # {{{
|
||||
return QComboBox.event(self, e)
|
||||
# }}}
|
||||
|
||||
class CcBoolDelegate(QStyledItemDelegate): # {{{
|
||||
class CcBoolDelegate(QStyledItemDelegate): # {{{
|
||||
def __init__(self, parent):
|
||||
'''
|
||||
Delegate for custom_column bool data.
|
||||
@ -431,7 +436,7 @@ class CcBoolDelegate(QStyledItemDelegate): # {{{
|
||||
|
||||
# }}}
|
||||
|
||||
class CcTemplateDelegate(QStyledItemDelegate): # {{{
|
||||
class CcTemplateDelegate(QStyledItemDelegate): # {{{
|
||||
def __init__(self, parent):
|
||||
'''
|
||||
Delegate for custom_column bool data.
|
||||
@ -457,7 +462,7 @@ class CcTemplateDelegate(QStyledItemDelegate): # {{{
|
||||
validation_formatter.validate(val)
|
||||
except Exception as err:
|
||||
error_dialog(self.parent(), _('Invalid template'),
|
||||
'<p>'+_('The template %s is invalid:')%val + \
|
||||
'<p>'+_('The template %s is invalid:')%val +
|
||||
'<br>'+str(err), show=True)
|
||||
model.setData(index, QVariant(val), Qt.EditRole)
|
||||
|
||||
@ -469,3 +474,4 @@ class CcTemplateDelegate(QStyledItemDelegate): # {{{
|
||||
|
||||
# }}}
|
||||
|
||||
|
||||
|
@ -6,7 +6,7 @@ __copyright__ = '2010, Kovid Goyal <kovid@kovidgoyal.net>'
|
||||
__docformat__ = 'restructuredtext en'
|
||||
|
||||
import functools, re, os, traceback, errno, time
|
||||
from collections import defaultdict
|
||||
from collections import defaultdict, namedtuple
|
||||
|
||||
from PyQt4.Qt import (QAbstractTableModel, Qt, pyqtSignal, QIcon, QImage,
|
||||
QModelIndex, QVariant, QDateTime, QColor, QPixmap)
|
||||
@ -29,6 +29,8 @@ from calibre.gui2.library import DEFAULT_SORT
|
||||
from calibre.utils.localization import calibre_langcode_to_name
|
||||
from calibre.library.coloring import color_row_key
|
||||
|
||||
Counts = namedtuple('Counts', 'total current')
|
||||
|
||||
def human_readable(size, precision=1):
|
||||
""" Convert a size in bytes into megabytes """
|
||||
return ('%.'+str(precision)+'f') % ((size/(1024.*1024.)),)
|
||||
@ -46,7 +48,7 @@ def default_image():
|
||||
_default_image = QImage(I('default_cover.png'))
|
||||
return _default_image
|
||||
|
||||
class ColumnColor(object):
|
||||
class ColumnColor(object): # {{{
|
||||
|
||||
def __init__(self, formatter, colors):
|
||||
self.mi = None
|
||||
@ -70,9 +72,9 @@ class ColumnColor(object):
|
||||
return color
|
||||
except:
|
||||
pass
|
||||
# }}}
|
||||
|
||||
|
||||
class ColumnIcon(object):
|
||||
class ColumnIcon(object): # {{{
|
||||
|
||||
def __init__(self, formatter):
|
||||
self.mi = None
|
||||
@ -108,8 +110,9 @@ class ColumnIcon(object):
|
||||
return icon_bitmap
|
||||
except:
|
||||
pass
|
||||
# }}}
|
||||
|
||||
class BooksModel(QAbstractTableModel): # {{{
|
||||
class BooksModel(QAbstractTableModel): # {{{
|
||||
|
||||
about_to_be_sorted = pyqtSignal(object, name='aboutToBeSorted')
|
||||
sorting_done = pyqtSignal(object, name='sortingDone')
|
||||
@ -150,7 +153,7 @@ class BooksModel(QAbstractTableModel): # {{{
|
||||
self.default_image = default_image()
|
||||
self.sorted_on = DEFAULT_SORT
|
||||
self.sort_history = [self.sorted_on]
|
||||
self.last_search = '' # The last search performed on this model
|
||||
self.last_search = '' # The last search performed on this model
|
||||
self.column_map = []
|
||||
self.headers = {}
|
||||
self.alignment_map = {}
|
||||
@ -240,7 +243,6 @@ class BooksModel(QAbstractTableModel): # {{{
|
||||
# Would like to do a join here, but the thread might be waiting to
|
||||
# do something on the GUI thread. Deadlock.
|
||||
|
||||
|
||||
def refresh_ids(self, ids, current_row=-1):
|
||||
self._clear_caches()
|
||||
rows = self.db.refresh_ids(ids)
|
||||
@ -282,9 +284,16 @@ class BooksModel(QAbstractTableModel): # {{{
|
||||
self._clear_caches()
|
||||
self.count_changed_signal.emit(self.db.count())
|
||||
|
||||
def counts(self):
|
||||
if self.db.data.search_restriction_applied():
|
||||
total = self.db.data.get_search_restriction_book_count()
|
||||
else:
|
||||
total = self.db.count()
|
||||
return Counts(total, self.count())
|
||||
|
||||
def row_indices(self, index):
|
||||
''' Return list indices of all cells in index.row()'''
|
||||
return [ self.index(index.row(), c) for c in range(self.columnCount(None))]
|
||||
return [self.index(index.row(), c) for c in range(self.columnCount(None))]
|
||||
|
||||
@property
|
||||
def by_author(self):
|
||||
@ -332,7 +341,7 @@ class BooksModel(QAbstractTableModel): # {{{
|
||||
while True:
|
||||
row_ += 1 if forward else -1
|
||||
if row_ < 0:
|
||||
row_ = self.count() - 1;
|
||||
row_ = self.count() - 1
|
||||
elif row_ >= self.count():
|
||||
row_ = 0
|
||||
if self.id(row_) in self.ids_to_highlight_set:
|
||||
@ -611,7 +620,7 @@ class BooksModel(QAbstractTableModel): # {{{
|
||||
data = None
|
||||
try:
|
||||
data = self.db.cover(row_number)
|
||||
except IndexError: # Happens if database has not yet been refreshed
|
||||
except IndexError: # Happens if database has not yet been refreshed
|
||||
pass
|
||||
|
||||
if not data:
|
||||
@ -673,7 +682,7 @@ class BooksModel(QAbstractTableModel): # {{{
|
||||
return QVariant(UNDEFINED_QDATETIME)
|
||||
|
||||
def bool_type(r, idx=-1):
|
||||
return None # displayed using a decorator
|
||||
return None # displayed using a decorator
|
||||
|
||||
def bool_type_decorator(r, idx=-1, bool_cols_are_tristate=True):
|
||||
val = force_to_bool(self.db.data[r][idx])
|
||||
@ -884,20 +893,24 @@ class BooksModel(QAbstractTableModel): # {{{
|
||||
ans = Qt.AlignVCenter | ALIGNMENT_MAP[self.alignment_map.get(cname,
|
||||
'left')]
|
||||
return QVariant(ans)
|
||||
#elif role == Qt.ToolTipRole and index.isValid():
|
||||
# elif role == Qt.ToolTipRole and index.isValid():
|
||||
# if self.column_map[index.column()] in self.editable_cols:
|
||||
# return QVariant(_("Double click to <b>edit</b> me<br><br>"))
|
||||
return NONE
|
||||
|
||||
def headerData(self, section, orientation, role):
|
||||
if orientation == Qt.Horizontal:
|
||||
if section >= len(self.column_map): # same problem as in data, the column_map can be wrong
|
||||
if section >= len(self.column_map): # same problem as in data, the column_map can be wrong
|
||||
return None
|
||||
if role == Qt.ToolTipRole:
|
||||
ht = self.column_map[section]
|
||||
if ht == 'timestamp': # change help text because users know this field as 'date'
|
||||
if ht == 'timestamp': # change help text because users know this field as 'date'
|
||||
ht = 'date'
|
||||
return QVariant(_('The lookup/search name is "{0}"').format(ht))
|
||||
if self.db.field_metadata[self.column_map[section]]['is_category']:
|
||||
is_cat = '.\n\n' + _('Click in this column and press Q to Quickview books with the same %s' % ht)
|
||||
else:
|
||||
is_cat = ''
|
||||
return QVariant(_('The lookup/search name is "{0}"{1}').format(ht, is_cat))
|
||||
if role == Qt.DisplayRole:
|
||||
return QVariant(self.headers[self.column_map[section]])
|
||||
return NONE
|
||||
@ -905,11 +918,10 @@ class BooksModel(QAbstractTableModel): # {{{
|
||||
col = self.db.field_metadata['uuid']['rec_index']
|
||||
return QVariant(_('This book\'s UUID is "{0}"').format(self.db.data[section][col]))
|
||||
|
||||
if role == Qt.DisplayRole: # orientation is vertical
|
||||
if role == Qt.DisplayRole: # orientation is vertical
|
||||
return QVariant(section+1)
|
||||
return NONE
|
||||
|
||||
|
||||
def flags(self, index):
|
||||
flags = QAbstractTableModel.flags(self, index)
|
||||
if index.isValid():
|
||||
@ -969,7 +981,7 @@ class BooksModel(QAbstractTableModel): # {{{
|
||||
tmpl = unicode(value.toString()).strip()
|
||||
disp = cc['display']
|
||||
disp['composite_template'] = tmpl
|
||||
self.db.set_custom_column_metadata(cc['colnum'], display = disp)
|
||||
self.db.set_custom_column_metadata(cc['colnum'], display=disp)
|
||||
self.refresh(reset=True)
|
||||
return True
|
||||
|
||||
@ -987,7 +999,7 @@ class BooksModel(QAbstractTableModel): # {{{
|
||||
return self._set_data(index, value)
|
||||
except (IOError, OSError) as err:
|
||||
import traceback
|
||||
if getattr(err, 'errno', None) == errno.EACCES: # Permission denied
|
||||
if getattr(err, 'errno', None) == errno.EACCES: # Permission denied
|
||||
fname = getattr(err, 'filename', None)
|
||||
p = 'Locked file: %s\n\n'%fname if fname else ''
|
||||
error_dialog(get_gui(), _('Permission denied'),
|
||||
@ -1017,7 +1029,7 @@ class BooksModel(QAbstractTableModel): # {{{
|
||||
return False
|
||||
val = (int(value.toInt()[0]) if column == 'rating' else
|
||||
value.toDateTime() if column in ('timestamp', 'pubdate')
|
||||
else unicode(value.toString()).strip())
|
||||
else re.sub(ur'\s', u' ', unicode(value.toString()).strip()))
|
||||
id = self.db.id(row)
|
||||
books_to_refresh = set([id])
|
||||
if column == 'rating':
|
||||
@ -1065,7 +1077,7 @@ class BooksModel(QAbstractTableModel): # {{{
|
||||
|
||||
# }}}
|
||||
|
||||
class OnDeviceSearch(SearchQueryParser): # {{{
|
||||
class OnDeviceSearch(SearchQueryParser): # {{{
|
||||
|
||||
USABLE_LOCATIONS = [
|
||||
'all',
|
||||
@ -1078,7 +1090,6 @@ class OnDeviceSearch(SearchQueryParser): # {{{
|
||||
'inlibrary'
|
||||
]
|
||||
|
||||
|
||||
def __init__(self, model):
|
||||
SearchQueryParser.__init__(self, locations=self.USABLE_LOCATIONS)
|
||||
self.model = model
|
||||
@ -1101,7 +1112,7 @@ class OnDeviceSearch(SearchQueryParser): # {{{
|
||||
elif query.startswith('~'):
|
||||
matchkind = REGEXP_MATCH
|
||||
query = query[1:]
|
||||
if matchkind != REGEXP_MATCH: ### leave case in regexps because it can be significant e.g. \S \W \D
|
||||
if matchkind != REGEXP_MATCH: # leave case in regexps because it can be significant e.g. \S \W \D
|
||||
query = query.lower()
|
||||
|
||||
if location not in self.USABLE_LOCATIONS:
|
||||
@ -1133,9 +1144,9 @@ class OnDeviceSearch(SearchQueryParser): # {{{
|
||||
if locvalue == 'inlibrary':
|
||||
continue # this is bool, so can't match below
|
||||
try:
|
||||
### Can't separate authors because comma is used for name sep and author sep
|
||||
### Exact match might not get what you want. For that reason, turn author
|
||||
### exactmatch searches into contains searches.
|
||||
# Can't separate authors because comma is used for name sep and author sep
|
||||
# Exact match might not get what you want. For that reason, turn author
|
||||
# exactmatch searches into contains searches.
|
||||
if locvalue == 'author' and matchkind == EQUALS_MATCH:
|
||||
m = CONTAINS_MATCH
|
||||
else:
|
||||
@ -1148,13 +1159,13 @@ class OnDeviceSearch(SearchQueryParser): # {{{
|
||||
if _match(query, vals, m, use_primary_find_in_search=upf):
|
||||
matches.add(index)
|
||||
break
|
||||
except ValueError: # Unicode errors
|
||||
except ValueError: # Unicode errors
|
||||
traceback.print_exc()
|
||||
return matches
|
||||
|
||||
# }}}
|
||||
|
||||
class DeviceDBSortKeyGen(object): # {{{
|
||||
class DeviceDBSortKeyGen(object): # {{{
|
||||
|
||||
def __init__(self, attr, keyfunc, db):
|
||||
self.attr = attr
|
||||
@ -1169,7 +1180,7 @@ class DeviceDBSortKeyGen(object): # {{{
|
||||
return ans
|
||||
# }}}
|
||||
|
||||
class DeviceBooksModel(BooksModel): # {{{
|
||||
class DeviceBooksModel(BooksModel): # {{{
|
||||
|
||||
booklist_dirtied = pyqtSignal()
|
||||
upload_collections = pyqtSignal(object)
|
||||
@ -1198,6 +1209,12 @@ class DeviceBooksModel(BooksModel): # {{{
|
||||
self.editable = ['title', 'authors', 'collections']
|
||||
self.book_in_library = None
|
||||
|
||||
def counts(self):
|
||||
return Counts(len(self.db), len(self.map))
|
||||
|
||||
def count_changed(self, *args):
|
||||
self.count_changed_signal.emit(len(self.db))
|
||||
|
||||
def mark_for_deletion(self, job, rows, rows_are_ids=False):
|
||||
db_indices = rows if rows_are_ids else self.indices(rows)
|
||||
db_items = [self.db[i] for i in db_indices if -1 < i < len(self.db)]
|
||||
@ -1237,11 +1254,13 @@ class DeviceBooksModel(BooksModel): # {{{
|
||||
if not succeeded:
|
||||
indices = self.row_indices(self.index(row, 0))
|
||||
self.dataChanged.emit(indices[0], indices[-1])
|
||||
self.count_changed()
|
||||
|
||||
def paths_deleted(self, paths):
|
||||
self.map = list(range(0, len(self.db)))
|
||||
self.resort(False)
|
||||
self.research(True)
|
||||
self.count_changed()
|
||||
|
||||
def is_row_marked_for_deletion(self, row):
|
||||
try:
|
||||
@ -1272,9 +1291,9 @@ class DeviceBooksModel(BooksModel): # {{{
|
||||
if index.isValid():
|
||||
cname = self.column_map[index.column()]
|
||||
if cname in self.editable and \
|
||||
(cname != 'collections' or \
|
||||
(callable(getattr(self.db, 'supports_collections', None)) and \
|
||||
self.db.supports_collections() and \
|
||||
(cname != 'collections' or
|
||||
(callable(getattr(self.db, 'supports_collections', None)) and
|
||||
self.db.supports_collections() and
|
||||
device_prefs['manage_device_metadata']=='manual')):
|
||||
flags |= Qt.ItemIsEditable
|
||||
return flags
|
||||
@ -1304,6 +1323,7 @@ class DeviceBooksModel(BooksModel): # {{{
|
||||
self.last_search = text
|
||||
if self.last_search:
|
||||
self.searched.emit(True)
|
||||
self.count_changed()
|
||||
|
||||
def research(self, reset=True):
|
||||
self.search(self.last_search, reset)
|
||||
@ -1373,6 +1393,7 @@ class DeviceBooksModel(BooksModel): # {{{
|
||||
self.map = list(range(0, len(db)))
|
||||
self.research(reset=False)
|
||||
self.resort()
|
||||
self.count_changed()
|
||||
|
||||
def cover(self, row):
|
||||
item = self.db[self.map[row]]
|
||||
@ -1432,7 +1453,7 @@ class DeviceBooksModel(BooksModel): # {{{
|
||||
return data
|
||||
|
||||
def paths(self, rows):
|
||||
return [self.db[self.map[r.row()]].path for r in rows ]
|
||||
return [self.db[self.map[r.row()]].path for r in rows]
|
||||
|
||||
def paths_for_db_ids(self, db_ids, as_map=False):
|
||||
res = defaultdict(list) if as_map else []
|
||||
@ -1517,7 +1538,7 @@ class DeviceBooksModel(BooksModel): # {{{
|
||||
elif role == Qt.ToolTipRole and index.isValid():
|
||||
if self.is_row_marked_for_deletion(row):
|
||||
return QVariant(_('Marked for deletion'))
|
||||
if cname in ['title', 'authors'] or (cname == 'collections' and \
|
||||
if cname in ['title', 'authors'] or (cname == 'collections' and
|
||||
self.db.supports_collections()):
|
||||
return QVariant(_("Double click to <b>edit</b> me<br><br>"))
|
||||
elif role == Qt.DecorationRole and cname == 'inlibrary':
|
||||
@ -1586,3 +1607,4 @@ class DeviceBooksModel(BooksModel): # {{{
|
||||
|
||||
# }}}
|
||||
|
||||
|
||||
|
@ -13,7 +13,7 @@ from PyQt4.Qt import (Qt, QDateTimeEdit, pyqtSignal, QMessageBox, QIcon,
|
||||
QToolButton, QWidget, QLabel, QGridLayout, QApplication,
|
||||
QDoubleSpinBox, QListWidgetItem, QSize, QPixmap, QDialog, QMenu,
|
||||
QPushButton, QSpinBox, QLineEdit, QSizePolicy, QDialogButtonBox,
|
||||
QAction, QCalendarWidget, QDate)
|
||||
QAction, QCalendarWidget, QDate, QDateTime)
|
||||
|
||||
from calibre.gui2.widgets import EnLineEdit, FormatList as _FormatList, ImageView
|
||||
from calibre.utils.icu import sort_key
|
||||
@ -45,6 +45,9 @@ def save_dialog(parent, title, msg, det_msg=''):
|
||||
d.setStandardButtons(QMessageBox.Yes | QMessageBox.No | QMessageBox.Cancel)
|
||||
return d.exec_()
|
||||
|
||||
def clean_text(x):
|
||||
return re.sub(r'\s', ' ', x.strip())
|
||||
|
||||
'''
|
||||
The interface common to all widgets used to set basic metadata
|
||||
class BasicMetadataWidget(object):
|
||||
@ -117,7 +120,7 @@ class TitleEdit(EnLineEdit):
|
||||
def current_val(self):
|
||||
|
||||
def fget(self):
|
||||
title = unicode(self.text()).strip()
|
||||
title = clean_text(unicode(self.text()))
|
||||
if not title:
|
||||
title = self.get_default()
|
||||
return title
|
||||
@ -289,7 +292,7 @@ class AuthorsEdit(EditWithComplete):
|
||||
def current_val(self):
|
||||
|
||||
def fget(self):
|
||||
au = unicode(self.text()).strip()
|
||||
au = clean_text(unicode(self.text()))
|
||||
if not au:
|
||||
au = self.get_default()
|
||||
return string_to_authors(au)
|
||||
@ -352,7 +355,7 @@ class AuthorSortEdit(EnLineEdit):
|
||||
def current_val(self):
|
||||
|
||||
def fget(self):
|
||||
return unicode(self.text()).strip()
|
||||
return clean_text(unicode(self.text()))
|
||||
|
||||
def fset(self, val):
|
||||
if not val:
|
||||
@ -472,7 +475,7 @@ class SeriesEdit(EditWithComplete):
|
||||
def current_val(self):
|
||||
|
||||
def fget(self):
|
||||
return unicode(self.currentText()).strip()
|
||||
return clean_text(unicode(self.currentText()))
|
||||
|
||||
def fset(self, val):
|
||||
if not val:
|
||||
@ -1135,7 +1138,7 @@ class TagsEdit(EditWithComplete): # {{{
|
||||
@dynamic_property
|
||||
def current_val(self):
|
||||
def fget(self):
|
||||
return [x.strip() for x in unicode(self.text()).split(',')]
|
||||
return [clean_text(x) for x in unicode(self.text()).split(',')]
|
||||
def fset(self, val):
|
||||
if not val:
|
||||
val = []
|
||||
@ -1237,7 +1240,7 @@ class IdentifiersEdit(QLineEdit): # {{{
|
||||
def current_val(self):
|
||||
def fget(self):
|
||||
raw = unicode(self.text()).strip()
|
||||
parts = [x.strip() for x in raw.split(',')]
|
||||
parts = [clean_text(x) for x in raw.split(',')]
|
||||
ans = {}
|
||||
for x in parts:
|
||||
c = x.split(':')
|
||||
@ -1376,7 +1379,7 @@ class PublisherEdit(EditWithComplete): # {{{
|
||||
def current_val(self):
|
||||
|
||||
def fget(self):
|
||||
return unicode(self.currentText()).strip()
|
||||
return clean_text(unicode(self.currentText()))
|
||||
|
||||
def fset(self, val):
|
||||
if not val:
|
||||
@ -1472,6 +1475,16 @@ class DateEdit(QDateTimeEdit):
|
||||
o, c = self.original_val, self.current_val
|
||||
return o != c
|
||||
|
||||
def keyPressEvent(self, ev):
|
||||
if ev.key() == Qt.Key_Minus:
|
||||
ev.accept()
|
||||
self.setDateTime(self.minimumDateTime())
|
||||
elif ev.key() == Qt.Key_Equal:
|
||||
ev.accept()
|
||||
self.setDateTime(QDateTime.currentDateTime())
|
||||
else:
|
||||
return QDateTimeEdit.keyPressEvent(self, ev)
|
||||
|
||||
class PubdateEdit(DateEdit):
|
||||
LABEL = _('Publishe&d:')
|
||||
FMT = 'MMM yyyy'
|
||||
|
@ -146,8 +146,12 @@ class CreateVirtualLibrary(QDialog): # {{{
|
||||
|
||||
<p>For example you can use a Virtual Library to only show you books with the Tag <i>"Unread"</i>
|
||||
or only books by <i>"My Favorite Author"</i> or only books in a particular series.</p>
|
||||
|
||||
<p>More information and examples are available in the
|
||||
<a href="http://manual.calibre-ebook.com/virtual_libraries.html">User Manual</a>.</p>
|
||||
'''))
|
||||
hl.setWordWrap(True)
|
||||
hl.setOpenExternalLinks(True)
|
||||
hl.setFrameStyle(hl.StyledPanel)
|
||||
gl.addWidget(hl, 0, 3, 4, 1)
|
||||
|
||||
|
@ -1,7 +1,7 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
|
||||
from __future__ import (unicode_literals, division, absolute_import, print_function)
|
||||
store_version = 1 # Needed for dynamic plugin loading
|
||||
store_version = 2 # Needed for dynamic plugin loading
|
||||
|
||||
__license__ = 'GPL 3'
|
||||
__copyright__ = '2011, John Schember <john@nachtimwald.com>'
|
||||
@ -54,14 +54,13 @@ class FoylesUKStore(BasicStoreConfig, StorePlugin):
|
||||
id_ = ''.join(data.xpath('.//p[@class="doc-cover"]/a/@href')).strip()
|
||||
if not id_:
|
||||
continue
|
||||
id_ = 'http://ebooks.foyles.co.uk' + id_
|
||||
|
||||
cover_url = ''.join(data.xpath('.//p[@class="doc-cover"]/a/img/@src'))
|
||||
title = ''.join(data.xpath('.//span[@class="title"]/a/text()'))
|
||||
author = ', '.join(data.xpath('.//span[@class="author"]/span[@class="author"]/text()'))
|
||||
price = ''.join(data.xpath('.//span[@itemprop="price"]/text()'))
|
||||
price = ''.join(data.xpath('.//span[@itemprop="price"]/text()')).strip()
|
||||
format_ = ''.join(data.xpath('.//p[@class="doc-meta-format"]/span[last()]/text()'))
|
||||
format_, ign, drm = format_.partition(' ')
|
||||
drm = SearchResult.DRM_LOCKED if 'DRM' in drm else SearchResult.DRM_UNLOCKED
|
||||
|
||||
counter -= 1
|
||||
|
||||
@ -71,7 +70,7 @@ class FoylesUKStore(BasicStoreConfig, StorePlugin):
|
||||
s.author = author.strip()
|
||||
s.price = price
|
||||
s.detail_item = id_
|
||||
s.drm = drm
|
||||
s.drm = SearchResult.DRM_LOCKED
|
||||
s.formats = format_
|
||||
|
||||
yield s
|
||||
|
@ -1,7 +1,7 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
|
||||
from __future__ import (unicode_literals, division, absolute_import, print_function)
|
||||
store_version = 3 # Needed for dynamic plugin loading
|
||||
store_version = 4 # Needed for dynamic plugin loading
|
||||
|
||||
__license__ = 'GPL 3'
|
||||
__copyright__ = '2011-2013, Tomasz Długosz <tomek3d@gmail.com>'
|
||||
@ -9,6 +9,7 @@ __docformat__ = 'restructuredtext en'
|
||||
|
||||
import re
|
||||
import urllib
|
||||
from base64 import b64encode
|
||||
from contextlib import closing
|
||||
|
||||
from lxml import html
|
||||
@ -25,21 +26,19 @@ from calibre.gui2.store.web_store_dialog import WebStoreDialog
|
||||
class WoblinkStore(BasicStoreConfig, StorePlugin):
|
||||
|
||||
def open(self, parent=None, detail_item=None, external=False):
|
||||
#aff_root = 'https://www.a4b-tracking.com/pl/stat-click-text-link/16/58/'
|
||||
aff_root = 'https://www.a4b-tracking.com/pl/stat-click-text-link/16/58/'
|
||||
url = 'http://woblink.com/publication'
|
||||
|
||||
#aff_url = aff_root + str(b64encode(url))
|
||||
aff_url = aff_root + str(b64encode(url))
|
||||
detail_url = None
|
||||
|
||||
if detail_item:
|
||||
detail_url = 'http://woblink.com' + detail_item #aff_root + str(b64encode('http://woblink.com' + detail_item))
|
||||
detail_url = aff_root + str(b64encode('http://woblink.com' + detail_item))
|
||||
|
||||
if external or self.config.get('open_external', False):
|
||||
#open_url(QUrl(url_slash_cleaner(detail_url if detail_url else aff_url)))
|
||||
open_url(QUrl(url_slash_cleaner(detail_url if detail_url else url)))
|
||||
open_url(QUrl(url_slash_cleaner(detail_url if detail_url else aff_url)))
|
||||
else:
|
||||
#d = WebStoreDialog(self.gui, url, parent, detail_url if detail_url else aff_url)
|
||||
d = WebStoreDialog(self.gui, url, parent, detail_url if detail_url else url)
|
||||
d = WebStoreDialog(self.gui, url, parent, detail_url if detail_url else aff_url)
|
||||
d.setWindowTitle(self.name)
|
||||
d.set_tags(self.config.get('tags', ''))
|
||||
d.exec_()
|
||||
|
@ -325,6 +325,11 @@ class Main(MainWindow, MainWindowMixin, DeviceMixin, EmailMixin, # {{{
|
||||
if self.library_view.model().rowCount(None) < 3:
|
||||
self.library_view.resizeColumnsToContents()
|
||||
|
||||
for view in ('library', 'memory', 'card_a', 'card_b'):
|
||||
v = getattr(self, '%s_view' % view)
|
||||
v.selectionModel().selectionChanged.connect(self.update_status_bar)
|
||||
v.model().count_changed_signal.connect(self.update_status_bar)
|
||||
|
||||
self.library_view.model().count_changed()
|
||||
self.bars_manager.database_changed(self.library_view.model().db)
|
||||
self.library_view.model().database_changed.connect(self.bars_manager.database_changed,
|
||||
@ -661,6 +666,7 @@ class Main(MainWindow, MainWindowMixin, DeviceMixin, EmailMixin, # {{{
|
||||
# Reset the view in case something changed while it was invisible
|
||||
self.current_view().reset()
|
||||
self.set_number_of_books_shown()
|
||||
self.update_status_bar()
|
||||
|
||||
def job_exception(self, job, dialog_title=_('Conversion Error')):
|
||||
if not hasattr(self, '_modeless_dialogs'):
|
||||
|
@ -41,7 +41,6 @@ class JavaScriptLoader(object):
|
||||
'hyphenation', 'hyphenator', 'utils', 'cfi', 'indexing', 'paged',
|
||||
'fs', 'math', 'extract')
|
||||
|
||||
|
||||
def __init__(self, dynamic_coffeescript=False):
|
||||
self._dynamic_coffeescript = dynamic_coffeescript
|
||||
if self._dynamic_coffeescript:
|
||||
@ -68,7 +67,8 @@ class JavaScriptLoader(object):
|
||||
allow_user_override=False).decode('utf-8')
|
||||
else:
|
||||
dynamic = (self._dynamic_coffeescript and
|
||||
os.path.exists(calibre.__file__))
|
||||
calibre.__file__ and not calibre.__file__.endswith('.pyo') and
|
||||
os.path.exists(calibre.__file__))
|
||||
ans = compiled_coffeescript(src, dynamic=dynamic).decode('utf-8')
|
||||
self._cache[name] = ans
|
||||
|
||||
@ -105,4 +105,3 @@ class JavaScriptLoader(object):
|
||||
evaljs('\n\n'.join(self._hp_cache.itervalues()))
|
||||
|
||||
return lang
|
||||
|
||||
|
@ -24,7 +24,7 @@ from calibre.gui2.dnd import (dnd_has_image, dnd_get_image, dnd_get_files,
|
||||
|
||||
history = XMLConfig('history')
|
||||
|
||||
class ProgressIndicator(QWidget): # {{{
|
||||
class ProgressIndicator(QWidget): # {{{
|
||||
|
||||
def __init__(self, *args):
|
||||
QWidget.__init__(self, *args)
|
||||
@ -57,7 +57,7 @@ class ProgressIndicator(QWidget): # {{{
|
||||
self.setVisible(False)
|
||||
# }}}
|
||||
|
||||
class FilenamePattern(QWidget, Ui_Form): # {{{
|
||||
class FilenamePattern(QWidget, Ui_Form): # {{{
|
||||
|
||||
changed_signal = pyqtSignal()
|
||||
|
||||
@ -82,7 +82,8 @@ class FilenamePattern(QWidget, Ui_Form): # {{{
|
||||
val = prefs['filename_pattern']
|
||||
self.re.lineEdit().setText(val)
|
||||
|
||||
val_hist += gprefs.get('filename_pattern_history', ['(?P<title>.+)', '(?P<author>[^_-]+) -?\s*(?P<series>[^_0-9-]*)(?P<series_index>[0-9]*)\s*-\s*(?P<title>[^_].+) ?'])
|
||||
val_hist += gprefs.get('filename_pattern_history', [
|
||||
'(?P<title>.+)', '(?P<author>[^_-]+) -?\s*(?P<series>[^_0-9-]*)(?P<series_index>[0-9]*)\s*-\s*(?P<title>[^_].+) ?'])
|
||||
if val in val_hist:
|
||||
del val_hist[val_hist.index(val)]
|
||||
val_hist.insert(0, val)
|
||||
@ -136,7 +137,6 @@ class FilenamePattern(QWidget, Ui_Form): # {{{
|
||||
|
||||
self.isbn.setText(_('No match') if mi.isbn is None else str(mi.isbn))
|
||||
|
||||
|
||||
def pattern(self):
|
||||
pat = unicode(self.re.lineEdit().text())
|
||||
return re.compile(pat)
|
||||
@ -157,7 +157,7 @@ class FilenamePattern(QWidget, Ui_Form): # {{{
|
||||
|
||||
# }}}
|
||||
|
||||
class FormatList(QListWidget): # {{{
|
||||
class FormatList(QListWidget): # {{{
|
||||
DROPABBLE_EXTENSIONS = BOOK_EXTENSIONS
|
||||
formats_dropped = pyqtSignal(object, object)
|
||||
delete_format = pyqtSignal()
|
||||
@ -186,7 +186,6 @@ class FormatList(QListWidget): # {{{
|
||||
if d.err is None:
|
||||
self.formats_dropped.emit(event, [d.fpath])
|
||||
|
||||
|
||||
def dragMoveEvent(self, event):
|
||||
event.acceptProposedAction()
|
||||
|
||||
@ -198,7 +197,7 @@ class FormatList(QListWidget): # {{{
|
||||
|
||||
# }}}
|
||||
|
||||
class ImageDropMixin(object): # {{{
|
||||
class ImageDropMixin(object): # {{{
|
||||
'''
|
||||
Adds support for dropping images onto widgets and a context menu for
|
||||
copy/pasting images.
|
||||
@ -272,7 +271,7 @@ class ImageDropMixin(object): # {{{
|
||||
pixmap_to_data(pmap))
|
||||
# }}}
|
||||
|
||||
class ImageView(QWidget, ImageDropMixin): # {{{
|
||||
class ImageView(QWidget, ImageDropMixin): # {{{
|
||||
|
||||
BORDER_WIDTH = 1
|
||||
cover_changed = pyqtSignal(object)
|
||||
@ -338,7 +337,7 @@ class ImageView(QWidget, ImageDropMixin): # {{{
|
||||
p.end()
|
||||
# }}}
|
||||
|
||||
class CoverView(QGraphicsView, ImageDropMixin): # {{{
|
||||
class CoverView(QGraphicsView, ImageDropMixin): # {{{
|
||||
|
||||
cover_changed = pyqtSignal(object)
|
||||
|
||||
@ -393,7 +392,7 @@ class BasicList(QListWidget):
|
||||
yield self.item(i)
|
||||
# }}}
|
||||
|
||||
class LineEditECM(object): # {{{
|
||||
class LineEditECM(object): # {{{
|
||||
|
||||
'''
|
||||
Extend the context menu of a QLineEdit to include more actions.
|
||||
@ -438,7 +437,7 @@ class LineEditECM(object): # {{{
|
||||
|
||||
# }}}
|
||||
|
||||
class EnLineEdit(LineEditECM, QLineEdit): # {{{
|
||||
class EnLineEdit(LineEditECM, QLineEdit): # {{{
|
||||
|
||||
'''
|
||||
Enhanced QLineEdit.
|
||||
@ -449,7 +448,7 @@ class EnLineEdit(LineEditECM, QLineEdit): # {{{
|
||||
pass
|
||||
# }}}
|
||||
|
||||
class ItemsCompleter(QCompleter): # {{{
|
||||
class ItemsCompleter(QCompleter): # {{{
|
||||
|
||||
'''
|
||||
A completer object that completes a list of tags. It is used in conjunction
|
||||
@ -541,7 +540,7 @@ class CompleteLineEdit(EnLineEdit): # {{{
|
||||
|
||||
# }}}
|
||||
|
||||
class EnComboBox(QComboBox): # {{{
|
||||
class EnComboBox(QComboBox): # {{{
|
||||
|
||||
'''
|
||||
Enhanced QComboBox.
|
||||
@ -567,7 +566,7 @@ class EnComboBox(QComboBox): # {{{
|
||||
|
||||
# }}}
|
||||
|
||||
class CompleteComboBox(EnComboBox): # {{{
|
||||
class CompleteComboBox(EnComboBox): # {{{
|
||||
|
||||
def __init__(self, *args):
|
||||
EnComboBox.__init__(self, *args)
|
||||
@ -584,7 +583,7 @@ class CompleteComboBox(EnComboBox): # {{{
|
||||
|
||||
# }}}
|
||||
|
||||
class HistoryLineEdit(QComboBox): # {{{
|
||||
class HistoryLineEdit(QComboBox): # {{{
|
||||
|
||||
lost_focus = pyqtSignal()
|
||||
|
||||
@ -637,7 +636,7 @@ class HistoryLineEdit(QComboBox): # {{{
|
||||
|
||||
# }}}
|
||||
|
||||
class ComboBoxWithHelp(QComboBox): # {{{
|
||||
class ComboBoxWithHelp(QComboBox): # {{{
|
||||
'''
|
||||
A combobox where item 0 is help text. CurrentText will return '' for item 0.
|
||||
Be sure to always fetch the text with currentText. Don't use the signals
|
||||
@ -686,7 +685,7 @@ class ComboBoxWithHelp(QComboBox): # {{{
|
||||
|
||||
# }}}
|
||||
|
||||
class EncodingComboBox(QComboBox): # {{{
|
||||
class EncodingComboBox(QComboBox): # {{{
|
||||
'''
|
||||
A combobox that holds text encodings support
|
||||
by Python. This is only populated with the most
|
||||
@ -711,7 +710,7 @@ class EncodingComboBox(QComboBox): # {{{
|
||||
|
||||
# }}}
|
||||
|
||||
class PythonHighlighter(QSyntaxHighlighter): # {{{
|
||||
class PythonHighlighter(QSyntaxHighlighter): # {{{
|
||||
|
||||
Rules = []
|
||||
Formats = {}
|
||||
@ -736,13 +735,11 @@ class PythonHighlighter(QSyntaxHighlighter): # {{{
|
||||
|
||||
CONSTANTS = ["False", "True", "None", "NotImplemented", "Ellipsis"]
|
||||
|
||||
|
||||
def __init__(self, parent=None):
|
||||
super(PythonHighlighter, self).__init__(parent)
|
||||
if not self.Config:
|
||||
self.loadConfig()
|
||||
|
||||
|
||||
self.initializeFormats()
|
||||
|
||||
PythonHighlighter.Rules.append((QRegExp(
|
||||
@ -752,7 +749,7 @@ class PythonHighlighter(QSyntaxHighlighter): # {{{
|
||||
"|".join([r"\b%s\b" % builtin for builtin in self.BUILTINS])),
|
||||
"builtin"))
|
||||
PythonHighlighter.Rules.append((QRegExp(
|
||||
"|".join([r"\b%s\b" % constant \
|
||||
"|".join([r"\b%s\b" % constant
|
||||
for constant in self.CONSTANTS])), "constant"))
|
||||
PythonHighlighter.Rules.append((QRegExp(
|
||||
r"\b[+-]?[0-9]+[lL]?\b"
|
||||
@ -812,7 +809,6 @@ class PythonHighlighter(QSyntaxHighlighter): # {{{
|
||||
Config["%sfontbold" % name] = QVariant(bold).toBool()
|
||||
Config["%sfontitalic" % name] = QVariant(italic).toBool()
|
||||
|
||||
|
||||
@classmethod
|
||||
def initializeFormats(cls):
|
||||
Config = cls.Config
|
||||
@ -829,7 +825,6 @@ class PythonHighlighter(QSyntaxHighlighter): # {{{
|
||||
format.setFontItalic(Config["%sfontitalic" % name])
|
||||
PythonHighlighter.Formats[name] = format
|
||||
|
||||
|
||||
def highlightBlock(self, text):
|
||||
NORMAL, TRIPLESINGLE, TRIPLEDOUBLE, ERROR = range(4)
|
||||
|
||||
@ -861,7 +856,7 @@ class PythonHighlighter(QSyntaxHighlighter): # {{{
|
||||
|
||||
# Slow but good quality highlighting for comments. For more
|
||||
# speed, comment this out and add the following to __init__:
|
||||
# PythonHighlighter.Rules.append((QRegExp(r"#.*"), "comment"))
|
||||
# PythonHighlighter.Rules.append((QRegExp(r"#.*"), "comment"))
|
||||
if text.isEmpty():
|
||||
pass
|
||||
elif text[0] == "#":
|
||||
@ -900,7 +895,6 @@ class PythonHighlighter(QSyntaxHighlighter): # {{{
|
||||
self.setFormat(i, text.length(),
|
||||
PythonHighlighter.Formats["string"])
|
||||
|
||||
|
||||
def rehighlight(self):
|
||||
QApplication.setOverrideCursor(QCursor(Qt.WaitCursor))
|
||||
QSyntaxHighlighter.rehighlight(self)
|
||||
@ -955,8 +949,8 @@ class LayoutButton(QToolButton):
|
||||
|
||||
def set_state_to_hide(self, *args):
|
||||
self.setChecked(True)
|
||||
self.setText(_('Hide %(label)s %(shortcut)s'%dict(
|
||||
label=self.label, shortcut=self.shortcut)))
|
||||
self.setText(_('Hide %(label)s %(shortcut)s')%dict(
|
||||
label=self.label, shortcut=self.shortcut))
|
||||
self.setToolTip(self.text())
|
||||
self.setStatusTip(self.text())
|
||||
|
||||
@ -1045,11 +1039,13 @@ class Splitter(QSplitter):
|
||||
@dynamic_property
|
||||
def side_index_size(self):
|
||||
def fget(self):
|
||||
if self.count() < 2: return 0
|
||||
if self.count() < 2:
|
||||
return 0
|
||||
return self.sizes()[self.side_index]
|
||||
|
||||
def fset(self, val):
|
||||
if self.count() < 2: return
|
||||
if self.count() < 2:
|
||||
return
|
||||
if val == 0 and not self.is_side_index_hidden:
|
||||
self.save_state()
|
||||
sizes = list(self.sizes())
|
||||
@ -1081,7 +1077,8 @@ class Splitter(QSplitter):
|
||||
self.resize_timer.start()
|
||||
|
||||
def get_state(self):
|
||||
if self.count() < 2: return (False, 200)
|
||||
if self.count() < 2:
|
||||
return (False, 200)
|
||||
return (self.desired_show, self.desired_side_size)
|
||||
|
||||
def apply_state(self, state, save_desired=True):
|
||||
@ -1142,3 +1139,4 @@ class Splitter(QSplitter):
|
||||
|
||||
# }}}
|
||||
|
||||
|
||||
|
Several file diffs in this commit are suppressed because they are too large, and some files are not shown because too many files have changed in this diff.