mirror of https://github.com/kovidgoyal/calibre.git
synced 2025-07-07 10:14:46 -04:00

sync with Kovid's branch

This commit is contained in: commit 43516e60f2
@ -1,3 +1,4 @@
# vim:fileencoding=UTF-8:ts=2:sw=2:sta:et:sts=2:ai
# Each release can have new features and bug fixes. Each of which
# must have a title and can optionally have linked tickets and a description.
# In addition they can have a type field which defaults to minor, but should be major
@ -19,6 +20,49 @@
# new recipes:
#   - title:

- version: 0.9.25
  date: 2013-03-29

  new features:
    - title: "Automatic adding: When checking for duplicates is enabled, use the same duplicates found dialog as is used during manual adding."
      tickets: [1160914]

    - title: "ToC Editor: Allow searching to find a location quickly when browsing through the book to select a location for a ToC item"

    - title: "ToC Editor: Add a button to quickly flatten the entire table of contents"

    - title: "Conversion: When converting a single book to EPUB or AZW3, add an option to automatically launch the Table of Contents editor after the conversion completes. Found under the Table of Contents section of the conversion dialog."

  bug fixes:
    - title: "calibredb: Nicer error messages when user provides invalid input"
      tickets: [1160452,1160631]

    - title: "News download: Always use the .jpg extension for jpeg images as apparently Moon+ Reader cannot handle .jpeg"

    - title: "Fix Book Details popup keyboard navigation doesn't work on a Mac"
      tickets: [1159610]

    - title: "Fix a regression that caused the case of the book files to not be changed when changing the case of the title/author on case insensitive filesystems"

  improved recipes:
    - RTE news
    - Various Polish news sources
    - Psychology Today
    - Foreign Affairs
    - History Today
    - Harpers Magazine (printed edition)
    - Business Week Magazine
    - The Hindu
    - Irish Times
    - Le Devoir

  new recipes:
    - title: Fortune Magazine
      author: Rick Shang

    - title: Eclipse Online
      author: Jim DeVona

- version: 0.9.24
  date: 2013-03-22

@ -750,8 +750,61 @@ If this property is detected by |app|, the following custom properties are recog
    opf.series
    opf.seriesindex

In addition to this, you can specify the picture to use as the cover by naming it ``opf.cover`` (right click, Picture->Options->Name) in the ODT. If no picture with this name is found, the 'smart' method is used.
As the cover detection might result in double covers in certain output formats, the process will remove the paragraph (only if the only content is the cover!) from the document. But this works only with the named picture!
In addition to this, you can specify the picture to use as the cover by naming
it ``opf.cover`` (right click, Picture->Options->Name) in the ODT. If no
picture with this name is found, the 'smart' method is used. As the cover
detection might result in double covers in certain output formats, the process
will remove the paragraph (only if the only content is the cover!) from the
document. But this works only with the named picture!

To disable cover detection you can set the custom property ``opf.nocover`` ('Yes or No' type) to Yes in advanced mode.

Converting to PDF
~~~~~~~~~~~~~~~~~~~

The first, most important, setting to decide on when converting to PDF is the page
size. By default, |app| uses a page size defined by the current
:guilabel:`Output profile`. So if your output profile is set to Kindle, |app|
will create a PDF with page size suitable for viewing on the small kindle
screen. However, if you view this PDF file on a computer screen, then it will
appear to have too large fonts. To create "normal" sized PDFs, use the override
page size option under :guilabel:`PDF Output` in the conversion dialog.

You can insert arbitrary headers and footers on each page of the PDF by
specifying header and footer templates. Templates are just snippets of HTML
code that get rendered in the header and footer locations. For example, to
display page numbers centered at the bottom of every page, in green, use the following
footer template::

    <p style="text-align:center; color:green">Page _PAGENUM_</p>

|app| will automatically replace _PAGENUM_ with the current page number. You
can even put different content on even and odd pages, for example the following
header template will show the title on odd pages and the author on even pages::

    <p style="text-align:right"><span class="even_page">_AUTHOR_</span><span class="odd_page"><i>_TITLE_</i></span></p>

|app| will automatically replace _TITLE_ and _AUTHOR_ with the title and author
of the document being converted. You can also display text at the left and
right edges and change the font size, as demonstrated with this header
template::

    <div style="font-size:x-small"><p style="float:left">_TITLE_</p><p style="float:right;"><i>_AUTHOR_</i></p></div>

This will display the title at the left and the author at the right, in a font
size smaller than the main text.

Finally, you can also use the current section in templates, as shown below::

    <p style="text-align:right">_SECTION_</p>

_SECTION_ is replaced by whatever the name of the current section is. These
names are taken from the metadata Table of Contents in the document (the PDF
Outline). If the document has no table of contents then it will be replaced by
empty text. If a single PDF page has multiple sections, the first section on
the page will be used.

.. note:: When adding headers and footers make sure you set the page top and
   bottom margins to large enough values, under the Page Setup section of the
   conversion dialog.
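The placeholders can also be combined in a single template. For example, a footer along the same lines as the examples above (a sketch, not taken from the manual) that shows the current section at the left and the page number at the right::

    <div style="font-size:x-small"><p style="float:left">_SECTION_</p><p style="float:right">Page _PAGENUM_</p></div>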
@ -129,11 +129,11 @@ tool that always produces valid EPUBs, |app| is not for you.

How do I use some of the advanced features of the conversion tools?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You can get help on any individual feature of the converters by mousing over
it in the GUI or running ``ebook-convert dummy.html .epub -h`` at a terminal.
A good place to start is to look at the following demo files that demonstrate
some of the advanced features:
* `html-demo.zip <http://calibre-ebook.com/downloads/html-demo.zip>`_
You can get help on any individual feature of the converters by mousing over
it in the GUI or running ``ebook-convert dummy.html .epub -h`` at a terminal.
A good place to start is to look at the following demo file that demonstrates
some of the advanced features
`html-demo.zip <http://calibre-ebook.com/downloads/html-demo.zip>`_


Device Integration

54 recipes/arret_sur_images.recipe Normal file
@ -0,0 +1,54 @@
from __future__ import unicode_literals

__license__ = 'WTFPL'
__author__ = '2013, François D. <franek at chicour.net>'
__description__ = 'Get some fresh news from Arrêt sur images'


from calibre.web.feeds.recipes import BasicNewsRecipe

class Asi(BasicNewsRecipe):

    title = 'Arrêt sur images'
    __author__ = 'François D. (aka franek)'
    description = 'Global news in french from news site "Arrêt sur images"'

    oldest_article = 7.0
    language = 'fr'
    needs_subscription = True
    max_articles_per_feed = 100

    simultaneous_downloads = 1
    timefmt = '[%a, %d %b %Y %I:%M +0200]'
    cover_url = 'http://www.arretsurimages.net/images/header/menu/menu_1.png'

    use_embedded_content = False
    no_stylesheets = True
    remove_javascript = True

    feeds = [
        ('vite dit et gratuit', 'http://www.arretsurimages.net/vite-dit.rss'),
        ('Toutes les chroniques', 'http://www.arretsurimages.net/chroniques.rss'),
        ('Contenus et dossiers', 'http://www.arretsurimages.net/dossiers.rss'),
    ]

    conversion_options = { 'smarten_punctuation' : True }

    remove_tags = [dict(id='vite-titre'), dict(id='header'), dict(id='wrap-connexion'), dict(id='col_right'), dict(name='div', attrs={'class':'bloc-chroniqueur-2'}), dict(id='footercontainer')]

    def print_version(self, url):
        return url.replace('contenu.php', 'contenu-imprimable.php')

    def get_browser(self):
        # Need to use robust HTML parser
        br = BasicNewsRecipe.get_browser(self, use_robust_parser=True)
        if self.username is not None and self.password is not None:
            br.open('http://www.arretsurimages.net/index.php')
            br.select_form(nr=0)
            br.form.set_all_readonly(False)
            br['redir'] = 'forum/login.php'
            br['username'] = self.username
            br['password'] = self.password
            br.submit()
        return br

@ -37,68 +37,15 @@ class BusinessWeek(BasicNewsRecipe):
        , 'language' : language
        }

    #remove_tags = [
    #    dict(attrs={'class':'inStory'})
    #    ,dict(name=['meta','link','iframe','base','embed','object','table','th','tr','td'])
    #    ,dict(attrs={'id':['inset','videoDisplay']})
    #]
    #keep_only_tags = [dict(name='div', attrs={'id':['story-body','storyBody']})]
    remove_attributes = ['lang']
    match_regexps = [r'http://www.businessweek.com/.*_page_[1-9].*']


    feeds = [
        (u'Top Stories', u'http://www.businessweek.com/topStories/rss/topStories.rss'),
        (u'Top News', u'http://www.businessweek.com/rss/bwdaily.rss'),
        (u'Asia', u'http://www.businessweek.com/rss/asia.rss'),
        (u'Autos', u'http://www.businessweek.com/rss/autos/index.rss'),
        (u'Classic Cars', u'http://rss.businessweek.com/bw_rss/classiccars'),
        (u'Hybrids', u'http://rss.businessweek.com/bw_rss/hybrids'),
        (u'Europe', u'http://www.businessweek.com/rss/europe.rss'),
        (u'Auto Reviews', u'http://rss.businessweek.com/bw_rss/autoreviews'),
        (u'Innovation & Design', u'http://www.businessweek.com/rss/innovate.rss'),
        (u'Architecture', u'http://www.businessweek.com/rss/architecture.rss'),
        (u'Brand Equity', u'http://www.businessweek.com/rss/brandequity.rss'),
        (u'Auto Design', u'http://www.businessweek.com/rss/carbuff.rss'),
        (u'Game Room', u'http://rss.businessweek.com/bw_rss/gameroom'),
        (u'Technology', u'http://www.businessweek.com/rss/technology.rss'),
        (u'Investing', u'http://rss.businessweek.com/bw_rss/investor'),
        (u'Small Business', u'http://www.businessweek.com/rss/smallbiz.rss'),
        (u'Careers', u'http://rss.businessweek.com/bw_rss/careers'),
        (u'B-Schools', u'http://www.businessweek.com/rss/bschools.rss'),
        (u'Magazine Selections', u'http://www.businessweek.com/rss/magazine.rss'),
        (u'CEO Guide to Tech', u'http://www.businessweek.com/rss/ceo_guide_tech.rss'),
        (u'Top Stories', u'http://www.businessweek.com/feeds/most-popular.rss'),
    ]

    def get_article_url(self, article):
        url = article.get('guid', None)
        if 'podcasts' in url:
            return None
        if 'surveys' in url:
            return None
        if 'images' in url:
            return None
        if 'feedroom' in url:
            return None
        if '/magazine/toc/' in url:
            return None
        rurl, sep, rest = url.rpartition('?')
        if rurl:
            return rurl
        return rest
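The `rpartition` idiom in `get_article_url` can be tried in isolation. A minimal standalone sketch (the helper name `strip_query` is invented here, not part of the recipe):

```python
def strip_query(url):
    # rpartition('?') returns ('', '', url) when no '?' is present,
    # so the head is only truthy when a query string was actually found
    rurl, sep, rest = url.rpartition('?')
    return rurl if rurl else rest

print(strip_query('http://www.businessweek.com/articles/foo?rss=true'))
# http://www.businessweek.com/articles/foo
print(strip_query('http://www.businessweek.com/articles/foo'))
# http://www.businessweek.com/articles/foo
```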

    def print_version(self, url):
        if '/news/' in url or '/blog/' in url:
            return url
        rurl = url.replace('http://www.businessweek.com/', 'http://www.businessweek.com/print/')
        return rurl.replace('/investing/', '/investor/')
        soup = self.index_to_soup(url)
        prntver = soup.find('li', attrs={'class':'print tracked'})
        rurl = prntver.find('a', href=True)['href']
        return rurl


    def preprocess_html(self, soup):
        for item in soup.findAll(style=True):
            del item['style']
        for alink in soup.findAll('a'):
            if alink.string is not None:
                tstr = alink.string
                alink.replaceWith(tstr)
        return soup

@ -11,8 +11,8 @@ class BusinessWeekMagazine(BasicNewsRecipe):
    category = 'news'
    encoding = 'UTF-8'
    keep_only_tags = [
        dict(name='div', attrs={'id':'article_body_container'}),
    ]
        dict(name='div', attrs={'id':'article_body_container'}),
    ]
    remove_tags = [dict(name='ui'),dict(name='li'),dict(name='div', attrs={'id':['share-email']})]
    no_javascript = True
    no_stylesheets = True
@ -25,6 +25,7 @@ class BusinessWeekMagazine(BasicNewsRecipe):

        #Find date
        mag=soup.find('h2',text='Magazine')
        self.log(mag)
        dates=self.tag_to_string(mag.findNext('h3'))
        self.timefmt = u' [%s]'%dates

@ -32,7 +33,7 @@ class BusinessWeekMagazine(BasicNewsRecipe):
        div0 = soup.find ('div', attrs={'class':'column left'})
        section_title = ''
        feeds = OrderedDict()
        for div in div0.findAll('h4'):
        for div in div0.findAll(['h4','h5']):
            articles = []
            section_title = self.tag_to_string(div.findPrevious('h3')).strip()
            title=self.tag_to_string(div.a).strip()
@ -48,7 +49,7 @@ class BusinessWeekMagazine(BasicNewsRecipe):
            feeds[section_title] += articles
        div1 = soup.find ('div', attrs={'class':'column center'})
        section_title = ''
        for div in div1.findAll('h5'):
        for div in div1.findAll(['h4','h5']):
            articles = []
            desc=self.tag_to_string(div.findNext('p')).strip()
            section_title = self.tag_to_string(div.findPrevious('h3')).strip()

@ -26,7 +26,7 @@ class ElDiplo_Recipe(BasicNewsRecipe):
    title = u'El Diplo'
    __author__ = 'Tomas Di Domenico'
    description = 'Publicacion mensual de Le Monde Diplomatique, edicion Argentina'
    langauge = 'es_AR'
    language = 'es_AR'
    needs_subscription = True
    auto_cleanup = True

@ -7,7 +7,8 @@ __author__ = 'teepel <teepel44@gmail.com>, Artur Stachecki <artur.stachecki@gmai
equipped.pl
'''

class equipped(AutomaticNewsRecipe):
from calibre.web.feeds.news import BasicNewsRecipe
class equipped(BasicNewsRecipe):
    title = u'Equipped'
    __author__ = 'teepel <teepel44@gmail.com>'
    language = 'pl'

@ -1,6 +1,7 @@
#!/usr/bin/env python
__license__ = 'GPL v3'

import re
from calibre.web.feeds.news import BasicNewsRecipe

class FocusRecipe(BasicNewsRecipe):

@ -1,6 +1,5 @@
from calibre.web.feeds.news import BasicNewsRecipe
import re
from calibre.ptempfile import PersistentTemporaryFile

class ForeignAffairsRecipe(BasicNewsRecipe):
    ''' there are three modifications:
@ -45,7 +44,6 @@ class ForeignAffairsRecipe(BasicNewsRecipe):
                 'publisher': publisher}

    temp_files = []
    articles_are_obfuscated = True

    def get_cover_url(self):
        soup = self.index_to_soup(self.FRONTPAGE)
@ -53,20 +51,6 @@ class ForeignAffairsRecipe(BasicNewsRecipe):
            img_url = div.find('img')['src']
        return self.INDEX + img_url

    def get_obfuscated_article(self, url):
        br = self.get_browser()
        br.open(url)

        response = br.follow_link(url_regex = r'/print/[0-9]+', nr = 0)
        html = response.read()

        self.temp_files.append(PersistentTemporaryFile('_fa.html'))
        self.temp_files[-1].write(html)
        self.temp_files[-1].close()

        return self.temp_files[-1].name


    def parse_index(self):

        answer = []
@ -89,10 +73,10 @@ class ForeignAffairsRecipe(BasicNewsRecipe):
                if div.find('a') is not None:
                    originalauthor=self.tag_to_string(div.findNext('div', attrs = {'class':'views-field-field-article-book-nid'}).div.a)
                    title=subsectiontitle+': '+self.tag_to_string(div.span.a)+' by '+originalauthor
                    url=self.INDEX+div.span.a['href']
                    url=self.INDEX+self.index_to_soup(self.INDEX+div.span.a['href']).find('a', attrs={'class':'fa_addthis_print'})['href']
                    atr=div.findNext('div', attrs = {'class': 'views-field-field-article-display-authors-value'})
                    if atr is not None:
                        author=self.tag_to_string(atr.span.a)
                        author=self.tag_to_string(atr.span)
                    else:
                        author=''
                    desc=div.findNext('span', attrs = {'class': 'views-field-field-article-summary-value'})
@ -106,10 +90,10 @@ class ForeignAffairsRecipe(BasicNewsRecipe):
            for div in sec.findAll('div', attrs = {'class': 'views-field-title'}):
                if div.find('a') is not None:
                    title=self.tag_to_string(div.span.a)
                    url=self.INDEX+div.span.a['href']
                    url=self.INDEX+self.index_to_soup(self.INDEX+div.span.a['href']).find('a', attrs={'class':'fa_addthis_print'})['href']
                    atr=div.findNext('div', attrs = {'class': 'views-field-field-article-display-authors-value'})
                    if atr is not None:
                        author=self.tag_to_string(atr.span.a)
                        author=self.tag_to_string(atr.span)
                    else:
                        author=''
                    desc=div.findNext('span', attrs = {'class': 'views-field-field-article-summary-value'})
@ -119,7 +103,7 @@ class ForeignAffairsRecipe(BasicNewsRecipe):
                    description=''
                articles.append({'title':title, 'date':None, 'url':url, 'description':description, 'author':author})
        if articles:
            answer.append((section, articles))
        answer.append((section, articles))
        return answer

    def preprocess_html(self, soup):

75 recipes/fortune_magazine.recipe Normal file
@ -0,0 +1,75 @@
from calibre.web.feeds.recipes import BasicNewsRecipe
from collections import OrderedDict

class Fortune(BasicNewsRecipe):

    title = 'Fortune Magazine'
    __author__ = 'Rick Shang'

    description = 'FORTUNE is a global business magazine that has been revered in its content and credibility since 1930. FORTUNE covers the entire field of business, including specific companies and business trends, prominent business leaders, and new ideas shaping the global marketplace.'
    language = 'en'
    category = 'news'
    encoding = 'UTF-8'
    keep_only_tags = [dict(attrs={'id':['storycontent']})]
    remove_tags = [dict(attrs={'class':['hed_side','socialMediaToolbarContainer']})]
    no_javascript = True
    no_stylesheets = True
    needs_subscription = True

    def get_browser(self):
        br = BasicNewsRecipe.get_browser(self)
        br.open('http://money.cnn.com/2013/03/21/smallbusiness/legal-marijuana-startups.pr.fortune/index.html')
        br.select_form(name="paywall-form")
        br['email'] = self.username
        br['password'] = self.password
        br.submit()
        return br

    def parse_index(self):
        articles = []
        soup0 = self.index_to_soup('http://money.cnn.com/magazines/fortune/')

        #Go to the latest issue
        soup = self.index_to_soup(soup0.find('div',attrs={'class':'latestissue'}).find('a',href=True)['href'])

        #Find cover & date
        cover_item = soup.find('div', attrs={'id':'cover-story'})
        cover = cover_item.find('img',src=True)
        self.cover_url = cover['src']
        date = self.tag_to_string(cover_item.find('div', attrs={'class':'tocDate'})).strip()
        self.timefmt = u' [%s]'%date

        feeds = OrderedDict()
        section_title = ''

        #checkout the cover story
        articles = []
        coverstory=soup.find('div', attrs={'class':'cnnHeadline'})
        title=self.tag_to_string(coverstory.a).strip()
        url=coverstory.a['href']
        desc=self.tag_to_string(coverstory.findNext('p', attrs={'class':'cnnBlurbTxt'}))
        articles.append({'title':title, 'url':url, 'description':desc, 'date':''})
        feeds['Cover Story'] = []
        feeds['Cover Story'] += articles

        for post in soup.findAll('div', attrs={'class':'cnnheader'}):
            section_title = self.tag_to_string(post).strip()
            articles = []

            ul=post.findNext('ul')
            for link in ul.findAll('li'):
                links=link.find('h2')
                title=self.tag_to_string(links.a).strip()
                url=links.a['href']
                desc=self.tag_to_string(link.find('p', attrs={'class':'cnnBlurbTxt'}))
                articles.append({'title':title, 'url':url, 'description':desc, 'date':''})

            if articles:
                if section_title not in feeds:
                    feeds[section_title] = []
                feeds[section_title] += articles

        ans = [(key, val) for key, val in feeds.iteritems()]
        return ans

@ -9,7 +9,7 @@ gofin.pl

from calibre.web.feeds.news import BasicNewsRecipe

class gofin(AutomaticNewsRecipe):
class gofin(BasicNewsRecipe):
    title = u'Gofin'
    __author__ = 'teepel <teepel44@gmail.com>'
    language = 'pl'

@ -77,10 +77,9 @@ class Harpers_full(BasicNewsRecipe):
        self.timefmt = u' [%s]'%date

        #get cover
        coverurl='http://harpers.org/wp-content/themes/harpers/ajax_microfiche.php?img=harpers-'+re.split('harpers.org/',currentIssue_url)[1]+'gif/0001.gif'
        soup2 = self.index_to_soup(coverurl)
        self.cover_url = self.tag_to_string(soup2.find('img')['src'])
        self.cover_url = soup1.find('div', attrs = {'class':'picture_hp'}).find('img', src=True)['src']
        self.log(self.cover_url)

        articles = []
        count = 0
        for item in soup1.findAll('div', attrs={'class':'articleData'}):

@ -1,6 +1,6 @@
import re
from calibre.web.feeds.recipes import BasicNewsRecipe
from collections import OrderedDict
import re
from calibre.web.feeds.news import BasicNewsRecipe

class HistoryToday(BasicNewsRecipe):

@ -19,7 +19,6 @@ class HistoryToday(BasicNewsRecipe):


    needs_subscription = True

    def get_browser(self):
        br = BasicNewsRecipe.get_browser(self)
        if self.username is not None and self.password is not None:
@ -46,8 +45,9 @@ class HistoryToday(BasicNewsRecipe):

        #Go to issue
        soup = self.index_to_soup('http://www.historytoday.com/contents')
        cover = soup.find('div',attrs={'id':'content-area'}).find('img')['src']
        cover = soup.find('div',attrs={'id':'content-area'}).find('img', attrs={'src':re.compile('.*cover.*')})['src']
        self.cover_url=cover
        self.log(self.cover_url)

        #Go to the main body

@ -84,4 +84,3 @@ class HistoryToday(BasicNewsRecipe):

    def cleanup(self):
        self.browser.open('http://www.historytoday.com/logout')

@ -12,7 +12,6 @@ http://www.ledevoir.com/
import re

from calibre.web.feeds.news import BasicNewsRecipe
from calibre.utils.magick import Image

class ledevoir(BasicNewsRecipe):
    author = 'Lorenzo Vigentini'
@ -129,12 +128,12 @@ class ledevoir(BasicNewsRecipe):
        img = Image()
        img.open(iurl)
        # width, height = img.size
        # print 'img is: ', iurl, 'width is: ', width, 'height is: ', height
        # print 'img is: ', iurl, 'width is: ', width, 'height is: ', height
        if img < 0:
            raise RuntimeError('Out of memory')
        img.set_compression_quality(30)
        img.save(iurl)
        return soup
    '''

@ -13,6 +13,7 @@ import re
class presseurop(BasicNewsRecipe):
    title = u'Presseurop'
    description = u'Najlepsze artykuły z prasy europejskiej'
    language = 'pl'
    oldest_article = 7
    max_articles_per_feed = 100
    auto_cleanup = True

@ -67,12 +67,13 @@ class PsychologyToday(BasicNewsRecipe):
                title = title + u' (%s)'%author
            article_page= self.index_to_soup('http://www.psychologytoday.com'+post.find('a', href=True)['href'])
            print_page=article_page.find('li', attrs={'class':'print_html first'})
            url='http://www.psychologytoday.com'+print_page.find('a',href=True)['href']
            desc = self.tag_to_string(post.find('div', attrs={'class':'collection-node-description'})).strip()
            self.log('Found article:', title)
            self.log('\t', url)
            self.log('\t', desc)
            articles.append({'title':title, 'url':url, 'date':'','description':desc})
            if print_page is not None:
                url='http://www.psychologytoday.com'+print_page.find('a',href=True)['href']
                desc = self.tag_to_string(post.find('div', attrs={'class':'collection-node-description'})).strip()
                self.log('Found article:', title)
                self.log('\t', url)
                self.log('\t', desc)
                articles.append({'title':title, 'url':url, 'date':'','description':desc})

        return [('Current Issue', articles)]

@ -23,8 +23,8 @@ class PublicoPT(BasicNewsRecipe):
    remove_empty_feeds = True
    extra_css = ' body{font-family: Arial,Helvetica,sans-serif } img{margin-bottom: 0.4em} '

    keep_only_tags = [dict(attrs={'class':['content-noticia-title','artigoHeader','ECOSFERA_MANCHETE','noticia','textoPrincipal','ECOSFERA_texto_01']})]
    remove_tags = [dict(attrs={'class':['options','subcoluna']})]
    keep_only_tags = [dict(attrs={'class':['hentry article single']})]
    remove_tags = [dict(attrs={'class':['entry-options entry-options-above group','entry-options entry-options-below group', 'module tag-list']})]

    feeds = [
        (u'Geral', u'http://feeds.feedburner.com/publicoRSS'),

@ -3,7 +3,6 @@
__license__ = 'GPL v3'

from calibre.web.feeds.news import BasicNewsRecipe
from calibre.utils.magick import Image

class ResPublicaNowaRecipe(BasicNewsRecipe):
    __license__ = 'GPL v3'

@ -6,10 +6,12 @@ class RTE(BasicNewsRecipe):
    max_articles_per_feed = 100
    __author__ = u'Robin Phillips'
    language = 'en_IE'
    auto_cleanup=True
    auto_cleanup_keep = '//figure[@class="photography gal642 single"]'

    remove_tags = [dict(attrs={'class':['topAd','botad','previousNextItem','headline','footerLinks','footernav']})]

    feeds = [(u'News', u'http://www.rte.ie/rss/news.xml'), (u'Sport', u'http://www.rte.ie/rss/sport.xml'), (u'Soccer', u'http://www.rte.ie/rss/soccer.xml'), (u'GAA', u'http://www.rte.ie/rss/gaa.xml'), (u'Rugby', u'http://www.rte.ie/rss/rugby.xml'), (u'Racing', u'http://www.rte.ie/rss/racing.xml'), (u'Business', u'http://www.rte.ie/rss/business.xml'), (u'Entertainment', u'http://www.rte.ie/rss/entertainment.xml')]

    def print_version(self, url):
        return url.replace('http://www', 'http://m')
    #def print_version(self, url):
    #    return url.replace('http://www', 'http://m')

@ -8,7 +8,6 @@ sport.pl
'''

from calibre.web.feeds.news import BasicNewsRecipe
import re

class sport_pl(BasicNewsRecipe):
    title = 'Sport.pl'

@ -9,7 +9,7 @@ wolnemedia.net

from calibre.web.feeds.news import BasicNewsRecipe

class wolne_media(AutomaticNewsRecipe):
class wolne_media(BasicNewsRecipe):
    title = u'Wolne Media'
    __author__ = 'teepel <teepel44@gmail.com>'
    language = 'pl'

Binary file not shown.
@ -79,7 +79,7 @@ author_name_copywords = ('Corporation', 'Company', 'Co.', 'Agency', 'Council',
# By default, calibre splits a string containing multiple author names on
# ampersands and the words "and" and "with". You can customize the splitting
# by changing the regular expression below. Strings are split on whatever the
# specified regular expression matches.
# specified regular expression matches, in addition to ampersands.
# Default: r'(?i),?\s+(and|with)\s+'
authors_split_regex = r'(?i),?\s+(and|with)\s+'
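As a rough illustration of how this tweak behaves, here is a standalone sketch (not calibre's actual implementation; the helper name `split_authors` is invented) of splitting on ampersands plus the default regular expression:

```python
import re

# The default value of the tweak above
authors_split_regex = r'(?i),?\s+(and|with)\s+'

def split_authors(raw):
    # calibre splits on ampersands in addition to the regex; re.split
    # also returns the captured keyword ("and"/"with"), so keep every
    # other piece (the names) and drop the separators
    names = []
    for chunk in raw.split('&'):
        pieces = re.split(authors_split_regex, chunk)
        names.extend(p.strip() for p in pieces[::2] if p.strip())
    return names

print(split_authors('John Smith and Jane Doe & ACME Corporation'))
# ['John Smith', 'Jane Doe', 'ACME Corporation']
```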
1552 setup/iso_639/ca.po
File diff suppressed because it is too large. Load Diff
@ -13,14 +13,14 @@ msgstr ""
"Report-Msgid-Bugs-To: Debian iso-codes team <pkg-isocodes-"
"devel@lists.alioth.debian.org>\n"
"POT-Creation-Date: 2011-11-25 14:01+0000\n"
"PO-Revision-Date: 2013-02-21 23:51+0000\n"
"PO-Revision-Date: 2013-03-23 10:17+0000\n"
"Last-Translator: Глория Хрусталёва <gloriya@hushmail.com>\n"
"Language-Team: Russian <debian-l10n-russian@lists.debian.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"X-Launchpad-Export-Date: 2013-02-23 05:19+0000\n"
"X-Generator: Launchpad (build 16506)\n"
"X-Launchpad-Export-Date: 2013-03-24 04:45+0000\n"
"X-Generator: Launchpad (build 16540)\n"
"Language: ru\n"

#. name for aaa
@ -5381,7 +5381,7 @@ msgstr ""

#. name for cof
msgid "Colorado"
msgstr ""
msgstr "Колорадо"

#. name for cog
msgid "Chong"
@ -5505,7 +5505,7 @@ msgstr ""

#. name for cqu
msgid "Quechua; Chilean"
msgstr ""
msgstr "Кечуа; Чилийский"

#. name for cra
msgid "Chara"

@ -376,7 +376,7 @@ def random_user_agent(choose=None):
        choose = random.randint(0, len(choices)-1)
    return choices[choose]

def browser(honor_time=True, max_time=2, mobile_browser=False, user_agent=None):
def browser(honor_time=True, max_time=2, mobile_browser=False, user_agent=None, use_robust_parser=False):
    '''
    Create a mechanize browser for web scraping. The browser handles cookies,
    refresh requests and ignores robots.txt. Also uses proxy if available.
@ -385,7 +385,11 @@ def browser(honor_time=True, max_time=2, mobile_browser=False, user_agent=None):
    :param max_time: Maximum time in seconds to wait during a refresh request
    '''
    from calibre.utils.browser import Browser
    opener = Browser()
    if use_robust_parser:
        import mechanize
        opener = Browser(factory=mechanize.RobustFactory())
    else:
        opener = Browser()
    opener.set_handle_refresh(True, max_time=max_time, honor_time=honor_time)
    opener.set_handle_robots(False)
    if user_agent is None:

@ -4,7 +4,7 @@ __license__ = 'GPL v3'
__copyright__ = '2008, Kovid Goyal kovid@kovidgoyal.net'
__docformat__ = 'restructuredtext en'
__appname__ = u'calibre'
numeric_version = (0, 9, 24)
numeric_version = (0, 9, 25)
__version__ = u'.'.join(map(unicode, numeric_version))
__author__ = u"Kovid Goyal <kovid@kovidgoyal.net>"

@ -757,6 +757,7 @@ from calibre.ebooks.metadata.sources.isbndb import ISBNDB
from calibre.ebooks.metadata.sources.overdrive import OverDrive
from calibre.ebooks.metadata.sources.douban import Douban
from calibre.ebooks.metadata.sources.ozon import Ozon
# from calibre.ebooks.metadata.sources.google_images import GoogleImages

plugins += [GoogleBooks, Amazon, Edelweiss, OpenLibrary, ISBNDB, OverDrive, Douban, Ozon]

@@ -1296,15 +1297,6 @@ class StoreBeamEBooksDEStore(StoreBase):
    formats = ['EPUB', 'MOBI', 'PDF']
    affiliate = True

-class StoreBeWriteStore(StoreBase):
-    name = 'BeWrite Books'
-    description = u'Publishers of fine books. Highly selective and editorially driven. Does not offer: books for children or exclusively YA, erotica, swords-and-sorcery fantasy and space-opera-style science fiction. All other genres are represented.'
-    actual_plugin = 'calibre.gui2.store.stores.bewrite_plugin:BeWriteStore'
-
-    drm_free_only = True
-    headquarters = 'US'
-    formats = ['EPUB', 'MOBI', 'PDF']
-
class StoreBiblioStore(StoreBase):
    name = u'Библио.бг'
    author = 'Alex Stanev'
@@ -1677,7 +1669,6 @@ plugins += [
    StoreBaenWebScriptionStore,
    StoreBNStore,
    StoreBeamEBooksDEStore,
-   StoreBeWriteStore,
    StoreBiblioStore,
    StoreBookotekaStore,
    StoreChitankaStore,
@@ -91,7 +91,7 @@ def restore_plugin_state_to_default(plugin_or_name):
    config['enabled_plugins'] = ep

default_disabled_plugins = set([
-   'Overdrive', 'Douban Books', 'OZON.ru', 'Edelweiss',
+   'Overdrive', 'Douban Books', 'OZON.ru', 'Edelweiss', 'Google Images',
])

def is_disabled(plugin):
@@ -23,9 +23,11 @@ from calibre.ebooks.metadata import title_sort, author_to_author_sort
from calibre.utils.icu import sort_key
from calibre.utils.config import to_json, from_json, prefs, tweaks
from calibre.utils.date import utcfromtimestamp, parse_date
-from calibre.utils.filenames import (is_case_sensitive, samefile, hardlink_file)
+from calibre.utils.filenames import (is_case_sensitive, samefile, hardlink_file, ascii_filename,
+    WindowsAtomicFolderMove)
+from calibre.utils.recycle_bin import delete_tree
from calibre.db.tables import (OneToOneTable, ManyToOneTable, ManyToManyTable,
-   SizeTable, FormatsTable, AuthorsTable, IdentifiersTable,
+   SizeTable, FormatsTable, AuthorsTable, IdentifiersTable, PathTable,
    CompositeTable, LanguagesTable)
# }}}

@@ -672,7 +674,7 @@ class DB(object):
            if col == 'cover' else col)
        if not metadata['column']:
            metadata['column'] = col
-       tables[col] = OneToOneTable(col, metadata)
+       tables[col] = (PathTable if col == 'path' else OneToOneTable)(col, metadata)

    for col in ('series', 'publisher', 'rating'):
        tables[col] = ManyToOneTable(col, self.field_metadata[col].copy())
@@ -778,6 +780,44 @@ class DB(object):
        self.user_version = 1
    # }}}

+   def normpath(self, path):
+       path = os.path.abspath(os.path.realpath(path))
+       if not self.is_case_sensitive:
+           path = os.path.normcase(path).lower()
+       return path
+
+   def rmtree(self, path, permanent=False):
+       if not self.normpath(self.library_path).startswith(self.normpath(path)):
+           delete_tree(path, permanent=permanent)
+
+   def construct_path_name(self, book_id, title, author):
+       '''
+       Construct the directory name for this book based on its metadata.
+       '''
+       author = ascii_filename(author)[:self.PATH_LIMIT].decode('ascii', 'replace')
+       title = ascii_filename(title)[:self.PATH_LIMIT].decode('ascii', 'replace')
+       while author[-1] in (' ', '.'):
+           author = author[:-1]
+       if not author:
+           author = ascii_filename(_('Unknown')).decode('ascii', 'replace')
+       return '%s/%s (%d)'%(author, title, book_id)
+
+   def construct_file_name(self, book_id, title, author):
+       '''
+       Construct the file name for this book based on its metadata.
+       '''
+       author = ascii_filename(author)[:self.PATH_LIMIT].decode('ascii', 'replace')
+       title = ascii_filename(title)[:self.PATH_LIMIT].decode('ascii', 'replace')
+       name = title + ' - ' + author
+       while name.endswith('.'):
+           name = name[:-1]
+       return name

    # Database layer API {{{

    def custom_table_names(self, num):
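The new directory-name construction above restricts names to ASCII, caps their length, and strips trailing spaces and dots that Windows cannot store. A rough standalone sketch of the same sanitization (this is not calibre's `ascii_filename`; `path_limit` stands in for `self.PATH_LIMIT`):

```python
def construct_path_name(book_id, title, author, path_limit=40):
    # Hypothetical standalone version of the sanitization above:
    # force ASCII, cap length, strip trailing spaces/dots (problematic
    # on Windows), fall back to 'Unknown' for empty author names.
    def clean(s):
        return s.encode('ascii', 'replace').decode('ascii')[:path_limit]
    author = clean(author).rstrip(' .')
    title = clean(title)
    if not author:
        author = 'Unknown'
    return '%s/%s (%d)' % (author, title, book_id)

print(construct_path_name(7, 'War & Peace', 'Tolstoy, Leo. '))
```

The `(%d)` suffix with the book id keeps directory names unique even when two books share title and author.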
@@ -865,7 +905,7 @@ class DB(object):
        return self.format_abspath(book_id, fmt, fname, path) is not None

    def copy_cover_to(self, path, dest, windows_atomic_move=None, use_hardlink=False):
-       path = os.path.join(self.library_path, path, 'cover.jpg')
+       path = os.path.abspath(os.path.join(self.library_path, path, 'cover.jpg'))
        if windows_atomic_move is not None:
            if not isinstance(dest, basestring):
                raise Exception("Error, you must pass the dest as a path when"
@@ -907,24 +947,125 @@ class DB(object):
            if not isinstance(dest, basestring):
                raise Exception("Error, you must pass the dest as a path when"
                    " using windows_atomic_move")
-           if dest and not samefile(dest, path):
-               windows_atomic_move.copy_path_to(path, dest)
+           if dest:
+               if samefile(dest, path):
+                   # Ensure that the file has the same case as dest
+                   try:
+                       if path != dest:
+                           os.rename(path, dest)
+                   except:
+                       pass # Nothing too catastrophic happened, the cases mismatch, that's all
+               else:
+                   windows_atomic_move.copy_path_to(path, dest)
        else:
            if hasattr(dest, 'write'):
                with lopen(path, 'rb') as f:
                    shutil.copyfileobj(f, dest)
                if hasattr(dest, 'flush'):
                    dest.flush()
-           elif dest and not samefile(dest, path):
-               if use_hardlink:
-                   try:
-                       hardlink_file(path, dest)
-                       return True
-                   except:
-                       pass
-               with lopen(path, 'rb') as f, lopen(dest, 'wb') as d:
-                   shutil.copyfileobj(f, d)
+           elif dest:
+               if samefile(dest, path):
+                   if not self.is_case_sensitive and path != dest:
+                       # Ensure that the file has the same case as dest
+                       try:
+                           os.rename(path, dest)
+                       except:
+                           pass # Nothing too catastrophic happened, the cases mismatch, that's all
+               else:
+                   if use_hardlink:
+                       try:
+                           hardlink_file(path, dest)
+                           return True
+                       except:
+                           pass
+                   with lopen(path, 'rb') as f, lopen(dest, 'wb') as d:
+                       shutil.copyfileobj(f, d)
        return True

+   def windows_check_if_files_in_use(self, paths):
+       '''
+       Raises an EACCES IOError if any of the files in the folder of book_id
+       are opened in another program on windows.
+       '''
+       if iswindows:
+           for path in paths:
+               spath = os.path.join(self.library_path, *path.split('/'))
+               wam = None
+               if os.path.exists(spath):
+                   try:
+                       wam = WindowsAtomicFolderMove(spath)
+                   finally:
+                       if wam is not None:
+                           wam.close_handles()
+
+   def update_path(self, book_id, title, author, path_field, formats_field):
+       path = self.construct_path_name(book_id, title, author)
+       current_path = path_field.for_book(book_id)
+       formats = formats_field.for_book(book_id, default_value=())
+       fname = self.construct_file_name(book_id, title, author)
+       # Check if the metadata used to construct paths has changed
+       changed = False
+       for fmt in formats:
+           name = formats_field.format_fname(book_id, fmt)
+           if name and name != fname:
+               changed = True
+               break
+       if path == current_path and not changed:
+           return
+       spath = os.path.join(self.library_path, *current_path.split('/'))
+       tpath = os.path.join(self.library_path, *path.split('/'))
+
+       source_ok = current_path and os.path.exists(spath)
+       wam = WindowsAtomicFolderMove(spath) if iswindows and source_ok else None
+       try:
+           if not os.path.exists(tpath):
+               os.makedirs(tpath)
+
+           if source_ok: # Migrate existing files
+               dest = os.path.join(tpath, 'cover.jpg')
+               self.copy_cover_to(current_path, dest,
+                   windows_atomic_move=wam, use_hardlink=True)
+               for fmt in formats:
+                   dest = os.path.join(tpath, fname+'.'+fmt.lower())
+                   self.copy_format_to(book_id, fmt, formats_field.format_fname(book_id, fmt), current_path,
+                       dest, windows_atomic_move=wam, use_hardlink=True)
+           # Update db to reflect new file locations
+           for fmt in formats:
+               formats_field.table.set_fname(book_id, fmt, fname, self)
+           path_field.table.set_path(book_id, path, self)
+
+           # Delete not needed directories
+           if source_ok:
+               if os.path.exists(spath) and not samefile(spath, tpath):
+                   if wam is not None:
+                       wam.delete_originals()
+                   self.rmtree(spath, permanent=True)
+                   parent = os.path.dirname(spath)
+                   if len(os.listdir(parent)) == 0:
+                       self.rmtree(parent, permanent=True)
+       finally:
+           if wam is not None:
+               wam.close_handles()
+
+       curpath = self.library_path
+       c1, c2 = current_path.split('/'), path.split('/')
+       if not self.is_case_sensitive and len(c1) == len(c2):
+           # On case-insensitive systems, title and author renames that only
+           # change case don't cause any changes to the directories in the file
+           # system. This can lead to having the directory names not match the
+           # title/author, which leads to trouble when libraries are copied to
+           # a case-sensitive system. The following code attempts to fix this
+           # by checking each segment. If they are different because of case,
+           # then rename the segment. Note that the code above correctly
+           # handles files in the directories, so no need to do them here.
+           for oldseg, newseg in zip(c1, c2):
+               if oldseg.lower() == newseg.lower() and oldseg != newseg:
+                   try:
+                       os.rename(os.path.join(curpath, oldseg),
+                           os.path.join(curpath, newseg))
+                   except:
+                       break # Fail silently since nothing catastrophic has happened
+               curpath = os.path.join(curpath, newseg)

    # }}}
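The segment-by-segment rename loop in `update_path` only matters on case-insensitive filesystems, but the mechanics can be exercised anywhere: walk each directory level and rename it when the on-disk case differs from the metadata-derived case. A minimal sketch (hypothetical `fix_case` helper, run in a temp directory):

```python
import os
import tempfile

def fix_case(library, old_segs, new_segs):
    # Walk parallel path segments; where two segments differ only by
    # case, rename the directory so the on-disk name matches the new
    # metadata-derived name. Failures are swallowed, as in the original.
    cur = library
    for old, new in zip(old_segs, new_segs):
        if old.lower() == new.lower() and old != new:
            try:
                os.rename(os.path.join(cur, old), os.path.join(cur, new))
            except OSError:
                break
        cur = os.path.join(cur, new)

root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, 'moved', 'moved (1)'))
fix_case(root, ['moved', 'moved (1)'], ['Moved', 'Moved (1)'])
print(sorted(os.listdir(root)))
```

On a case-insensitive filesystem this is what keeps directory names in sync with the title/author after a case-only rename; on a case-sensitive one the renames are ordinary moves.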
@@ -12,6 +12,7 @@ from io import BytesIO
from collections import defaultdict
from functools import wraps, partial

+from calibre.constants import iswindows
from calibre.db import SPOOL_SIZE
from calibre.db.categories import get_categories
from calibre.db.locking import create_locks, RecordLock
@@ -219,6 +220,8 @@ class Cache(object):
            field.series_field = self.fields['series']
        elif name == 'authors':
            field.author_sort_field = self.fields['author_sort']
+       elif name == 'title':
+           field.title_sort_field = self.fields['sort']

    @read_api
    def field_for(self, name, book_id, default_value=None):
@@ -619,11 +622,12 @@ class Cache(object):

    @write_api
    def set_field(self, name, book_id_to_val_map, allow_case_change=True):
-       # TODO: Specialize title/authors to also update path
        # TODO: Handle updating caches used by composite fields
        # TODO: Ensure the sort fields are updated for title/author/series?
        f = self.fields[name]
        is_series = f.metadata['datatype'] == 'series'
+       update_path = name in {'title', 'authors'}
+       if update_path and iswindows:
+           paths = (x for x in (self._field_for('path', book_id) for book_id in book_id_to_val_map) if x)
+           self.backend.windows_check_if_files_in_use(paths)

        if is_series:
            bimap, simap = {}, {}
@@ -646,11 +650,31 @@ class Cache(object):
            sf = self.fields[f.name+'_index']
            dirtied |= sf.writer.set_books(simap, self.backend, allow_case_change=False)

        if dirtied and self.composites:
            for name in self.composites:
                self.fields[name].pop_cache(dirtied)

+       if dirtied and update_path:
+           self._update_path(dirtied, mark_as_dirtied=False)
+
+       # TODO: Mark these as dirtied so that the opf is regenerated
+
        return dirtied

+   @write_api
+   def update_path(self, book_ids, mark_as_dirtied=True):
+       for book_id in book_ids:
+           title = self._field_for('title', book_id, default_value=_('Unknown'))
+           author = self._field_for('authors', book_id, default_value=(_('Unknown'),))[0]
+           self.backend.update_path(book_id, title, author, self.fields['path'], self.fields['formats'])
+           if mark_as_dirtied:
+               pass
+               # TODO: Mark these books as dirtied so that metadata.opf is
+               # re-created

    # }}}
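In the new flow, `set_field` returns the set of dirtied book ids and, for title/author edits, chains into a path update for exactly those books. A toy model of that contract (hypothetical names, not the calibre API):

```python
def set_field(name, changes, fields, on_path_update):
    # Miniature of the flow above: collect the ids whose stored value
    # actually changed, apply the writes, and for title/author edits
    # hand the dirtied set to a path-update callback.
    dirtied = {book_id for book_id, val in changes.items()
               if fields.get((name, book_id)) != val}
    for book_id in dirtied:
        fields[(name, book_id)] = changes[book_id]
    if dirtied and name in {'title', 'authors'}:
        on_path_update(dirtied)
    return dirtied

updated = []
fields = {('title', 1): 'Old', ('rating', 1): 3}
d = set_field('title', {1: 'New'}, fields, updated.extend)
```

Only fields that participate in on-disk paths trigger the callback; a rating edit dirties the book but leaves the filesystem alone.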
-class SortKey(object):
+class SortKey(object): # {{{

    def __init__(self, fields, sort_keys, book_id):
        self.orders = tuple(1 if f[1] else -1 for f in fields)
@@ -662,19 +686,5 @@ class SortKey(object):
        if ans != 0:
            return ans * order
        return 0

-# Testing {{{
-
-def test(library_path):
-    from calibre.db.backend import DB
-    backend = DB(library_path)
-    cache = Cache(backend)
-    cache.init()
-    print ('All book ids:', cache.all_book_ids())
-
-if __name__ == '__main__':
-    from calibre.utils.config import prefs
-    test(prefs['library_path'])
-
-# }}}
@@ -167,9 +167,10 @@ class CompositeField(OneToOneField):
        with self._lock:
            self._render_cache = {}

-   def pop_cache(self, book_id):
+   def pop_cache(self, book_ids):
        with self._lock:
-           self._render_cache.pop(book_id, None)
+           for book_id in book_ids:
+               self._render_cache.pop(book_id, None)

    def get_value_with_cache(self, book_id, get_metadata):
        with self._lock:
@@ -177,6 +178,8 @@ class CompositeField(OneToOneField):
        if ans is None:
            mi = get_metadata(book_id)
            ans = mi.get('#'+self.metadata['label'])
+           with self._lock:
+               self._render_cache[book_id] = ans
        return ans

    def sort_keys_for_books(self, get_metadata, lang_map, all_book_ids):
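`pop_cache` changing from a single id to a set of ids means a whole dirtied set can be invalidated under one lock acquisition. A self-contained sketch of the pattern:

```python
from threading import Lock

class RenderCache:
    # Sketch of the pattern above: bulk invalidation of a render cache
    # under a single lock acquisition, ignoring ids that are not cached.
    def __init__(self):
        self._lock = Lock()
        self._render_cache = {}

    def put(self, book_id, value):
        with self._lock:
            self._render_cache[book_id] = value

    def pop_cache(self, book_ids):
        with self._lock:
            for book_id in book_ids:
                self._render_cache.pop(book_id, None)

c = RenderCache()
for i in range(5):
    c.put(i, 'v%d' % i)
c.pop_cache({1, 3, 99})  # missing ids are ignored via pop(..., None)
print(sorted(c._render_cache))
```

`pop(key, None)` keeps the loop safe when a dirtied book was never rendered, so callers can pass the raw dirtied set straight through.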
@@ -13,7 +13,6 @@ from dateutil.tz import tzoffset

from calibre.constants import plugins
from calibre.utils.date import parse_date, local_tz, UNDEFINED_DATE
from calibre.utils.localization import lang_map
from calibre.ebooks.metadata import author_to_author_sort

_c_speedup = plugins['speedup'][0]
@@ -83,6 +82,13 @@ class OneToOneTable(Table):
            self.metadata['column'], self.metadata['table'])):
        self.book_col_map[row[0]] = self.unserialize(row[1])

+class PathTable(OneToOneTable):
+
+    def set_path(self, book_id, path, db):
+        self.book_col_map[book_id] = path
+        db.conn.execute('UPDATE books SET path=? WHERE id=?',
+            (path, book_id))
+
class SizeTable(OneToOneTable):

    def read(self, db):
@@ -144,7 +150,7 @@ class ManyToManyTable(ManyToOneTable):
    '''

    table_type = MANY_MANY
-   selectq = 'SELECT book, {0} FROM {1}'
+   selectq = 'SELECT book, {0} FROM {1} ORDER BY id'

    def read_maps(self, db):
        for row in db.conn.execute(
@@ -161,8 +167,6 @@ class ManyToManyTable(ManyToOneTable):

class AuthorsTable(ManyToManyTable):

-   selectq = 'SELECT book, {0} FROM {1} ORDER BY id'
-
    def read_id_maps(self, db):
        self.alink_map = {}
        self.asort_map = {}
@@ -196,6 +200,11 @@ class FormatsTable(ManyToManyTable):
        for key in tuple(self.book_col_map.iterkeys()):
            self.book_col_map[key] = tuple(sorted(self.book_col_map[key]))

+   def set_fname(self, book_id, fmt, fname, db):
+       self.fname_map[book_id][fmt] = fname
+       db.conn.execute('UPDATE data SET name=? WHERE book=? AND format=?',
+           (fname, book_id, fmt))
+
class IdentifiersTable(ManyToManyTable):

    def read_id_maps(self, db):
@@ -215,6 +224,3 @@ class LanguagesTable(ManyToManyTable):

    def read_id_maps(self, db):
        ManyToManyTable.read_id_maps(self, db)
        lm = lang_map()
        self.lang_name_map = {x:lm.get(x, x) for x in self.id_map.itervalues()}
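Two of the table changes above are easy to demonstrate against an in-memory SQLite database: `set_path` keeps the Python-side map and the `books.path` column in sync, and `ORDER BY id` on the link-table SELECT makes row order deterministic (insertion order, which is what preserves e.g. language order for a book). A simplified sketch, with the schema reduced to what the statements touch:

```python
import sqlite3

# Minimal stand-in for the books table used by PathTable.set_path.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE books (id INTEGER PRIMARY KEY, path TEXT)')
conn.execute("INSERT INTO books VALUES (1, 'Old/Old (1)')")

book_col_map = {1: 'Old/Old (1)'}

def set_path(book_id, path):
    # Update the in-memory map and the database in one step, as the
    # new PathTable does.
    book_col_map[book_id] = path
    conn.execute('UPDATE books SET path=? WHERE id=?', (path, book_id))

set_path(1, 'Moved/Moved (1)')
row = conn.execute('SELECT path FROM books WHERE id=1').fetchone()
```

Without an explicit `ORDER BY`, SQLite gives no ordering guarantee at all, which is why the link-table query gained `ORDER BY id` rather than relying on accidental row order.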
@@ -7,7 +7,7 @@ __license__ = 'GPL v3'
__copyright__ = '2011, Kovid Goyal <kovid@kovidgoyal.net>'
__docformat__ = 'restructuredtext en'

-import unittest, os, shutil, tempfile, atexit
+import unittest, os, shutil, tempfile, atexit, gc
from functools import partial
from io import BytesIO
from future_builtins import map
@@ -21,6 +21,7 @@ class BaseTest(unittest.TestCase):
        self.create_db(self.library_path)

    def tearDown(self):
+       gc.collect(), gc.collect()
        shutil.rmtree(self.library_path)

    def create_db(self, library_path):
@@ -36,6 +37,7 @@ class BaseTest(unittest.TestCase):
        db.add_format(1, 'FMT1', BytesIO(b'book1fmt1'), index_is_id=True)
        db.add_format(1, 'FMT2', BytesIO(b'book1fmt2'), index_is_id=True)
        db.add_format(2, 'FMT1', BytesIO(b'book2fmt1'), index_is_id=True)
+       db.conn.close()
        return dest

    def init_cache(self, library_path):
@@ -65,6 +67,10 @@ class BaseTest(unittest.TestCase):
        shutil.copytree(library_path, dest)
        return dest

+   @property
+   def cloned_library(self):
+       return self.clone_library(self.library_path)
+
    def compare_metadata(self, mi1, mi2):
        allfk1 = mi1.all_field_keys()
        allfk2 = mi2.all_field_keys()
@@ -79,6 +85,8 @@ class BaseTest(unittest.TestCase):
        attr1, attr2 = getattr(mi1, attr), getattr(mi2, attr)
        if attr == 'formats':
            attr1, attr2 = map(lambda x:tuple(x) if x else (), (attr1, attr2))
+       if isinstance(attr1, (tuple, list)) and 'authors' not in attr and 'languages' not in attr:
+           attr1, attr2 = set(attr1), set(attr2)
        self.assertEqual(attr1, attr2,
            '%s not the same: %r != %r'%(attr, attr1, attr2))
        if attr.startswith('#'):
src/calibre/db/tests/filesystem.py (new file, 82 lines)
@@ -0,0 +1,82 @@
#!/usr/bin/env python
# vim:fileencoding=UTF-8:ts=4:sw=4:sta:et:sts=4:fdm=marker:ai
from __future__ import (unicode_literals, division, absolute_import,
                        print_function)

__license__ = 'GPL v3'
__copyright__ = '2013, Kovid Goyal <kovid at kovidgoyal.net>'
__docformat__ = 'restructuredtext en'

import unittest, os
from io import BytesIO

from calibre.constants import iswindows
from calibre.db.tests.base import BaseTest

class FilesystemTest(BaseTest):

    def get_filesystem_data(self, cache, book_id):
        fmts = cache.field_for('formats', book_id)
        ans = {}
        for fmt in fmts:
            buf = BytesIO()
            if cache.copy_format_to(book_id, fmt, buf):
                ans[fmt] = buf.getvalue()
        buf = BytesIO()
        if cache.copy_cover_to(book_id, buf):
            ans['cover'] = buf.getvalue()
        return ans

    def test_metadata_move(self):
        'Test the moving of files when title/author change'
        cl = self.cloned_library
        cache = self.init_cache(cl)
        ae, af, sf = self.assertEqual, self.assertFalse, cache.set_field

        # Test that changing metadata on a book with no formats/cover works
        ae(sf('title', {3:'moved1'}), set([3]))
        ae(sf('authors', {3:'moved1'}), set([3]))
        ae(sf('title', {3:'Moved1'}), set([3]))
        ae(sf('authors', {3:'Moved1'}), set([3]))
        ae(cache.field_for('title', 3), 'Moved1')
        ae(cache.field_for('authors', 3), ('Moved1',))

        # Now try with a book that has covers and formats
        orig_data = self.get_filesystem_data(cache, 1)
        orig_fpath = cache.format_abspath(1, 'FMT1')
        ae(sf('title', {1:'moved'}), set([1]))
        ae(sf('authors', {1:'moved'}), set([1]))
        ae(sf('title', {1:'Moved'}), set([1]))
        ae(sf('authors', {1:'Moved'}), set([1]))
        ae(cache.field_for('title', 1), 'Moved')
        ae(cache.field_for('authors', 1), ('Moved',))
        cache2 = self.init_cache(cl)
        for c in (cache, cache2):
            data = self.get_filesystem_data(c, 1)
            ae(set(orig_data.iterkeys()), set(data.iterkeys()))
            ae(orig_data, data, 'Filesystem data does not match')
            ae(c.field_for('path', 1), 'Moved/Moved (1)')
            ae(c.field_for('path', 3), 'Moved1/Moved1 (3)')
            fpath = c.format_abspath(1, 'FMT1').replace(os.sep, '/').split('/')
            ae(fpath[-3:], ['Moved', 'Moved (1)', 'Moved - Moved.fmt1'])
            af(os.path.exists(os.path.dirname(orig_fpath)), 'Original book folder still exists')
            # Check that the filesystem reflects fpath (especially on
            # case-insensitive systems).
            for x in range(1, 4):
                base = os.sep.join(fpath[:-x])
                part = fpath[-x:][0]
                self.assertIn(part, os.listdir(base))

    @unittest.skipUnless(iswindows, 'Windows only')
    def test_windows_atomic_move(self):
        'Test book file open in another process when changing metadata'
        cl = self.cloned_library
        cache = self.init_cache(cl)
        fpath = cache.format_abspath(1, 'FMT1')
        f = open(fpath, 'rb')
        with self.assertRaises(IOError):
            cache.set_field('title', {1:'Moved'})
        f.close()
        self.assertNotEqual(cache.field_for('title', 1), 'Moved', 'Title was changed despite file lock')
src/calibre/db/tests/main.py (new file, 23 lines)
@@ -0,0 +1,23 @@
#!/usr/bin/env python
# vim:fileencoding=UTF-8:ts=4:sw=4:sta:et:sts=4:fdm=marker:ai
from __future__ import (unicode_literals, division, absolute_import,
                        print_function)

__license__ = 'GPL v3'
__copyright__ = '2013, Kovid Goyal <kovid at kovidgoyal.net>'
__docformat__ = 'restructuredtext en'

import unittest, os, argparse

def find_tests():
    return unittest.defaultTestLoader.discover(os.path.dirname(os.path.abspath(__file__)), pattern='*.py')

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('name', nargs='?', default=None, help='The name of the test to run, for e.g. writing.WritingTest.many_many_basic')
    args = parser.parse_args()
    if args.name:
        unittest.TextTestRunner(verbosity=4).run(unittest.defaultTestLoader.loadTestsFromName(args.name))
    else:
        unittest.TextTestRunner(verbosity=4).run(find_tests())
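The new `main.py` runner is plain `unittest` plumbing: discover tests (or load one by dotted name) and feed the suite to a `TextTestRunner`. A minimal demonstration of the same calls, using a throwaway test case:

```python
import io
import unittest

# A stand-in test case, just to have something for the loader to find.
class DemoTest(unittest.TestCase):
    def test_ok(self):
        self.assertEqual(1 + 1, 2)

# Load from a TestCase class (discover()/loadTestsFromName work the same
# way) and run it; the result object reports success and counts.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DemoTest)
result = unittest.TextTestRunner(stream=io.StringIO(), verbosity=0).run(suite)
```

Centralizing this in one `main.py` is what lets the per-file `tests()`/`run()` helpers (and their `import unittest` lines) be deleted from the individual test modules in this commit.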
@@ -7,7 +7,7 @@ __license__ = 'GPL v3'
__copyright__ = '2011, Kovid Goyal <kovid@kovidgoyal.net>'
__docformat__ = 'restructuredtext en'

-import unittest, datetime
+import datetime

from calibre.utils.date import utc_tz
from calibre.db.tests.base import BaseTest
@@ -115,6 +115,8 @@ class ReadingTest(BaseTest):
        for book_id, test in tests.iteritems():
            for field, expected_val in test.iteritems():
                val = cache.field_for(field, book_id)
+               if isinstance(val, tuple) and 'authors' not in field and 'languages' not in field:
+                   val, expected_val = set(val), set(expected_val)
                self.assertEqual(expected_val, val,
                    'Book id: %d Field: %s failed: %r != %r'%(
                        book_id, field, expected_val, val))
@@ -173,6 +175,7 @@ class ReadingTest(BaseTest):
            mi.format_metadata = dict(mi.format_metadata)
            if mi.formats:
                mi.formats = tuple(mi.formats)
        old.conn.close()
+       old = None

        cache = self.init_cache(self.library_path)
@@ -189,6 +192,7 @@ class ReadingTest(BaseTest):
        from calibre.library.database2 import LibraryDatabase2
        old = LibraryDatabase2(self.library_path)
        covers = {i: old.cover(i, index_is_id=True) for i in old.all_ids()}
        old.conn.close()
+       old = None
        cache = self.init_cache(self.library_path)
        for book_id, cdata in covers.iteritems():
@@ -247,6 +251,7 @@ class ReadingTest(BaseTest):
            '#formats:fmt1', '#formats:fmt2', '#formats:fmt1 and #formats:fmt2',

        )}
        old.conn.close()
+       old = None

        cache = self.init_cache(self.library_path)
@@ -263,6 +268,7 @@ class ReadingTest(BaseTest):
        from calibre.library.database2 import LibraryDatabase2
        old = LibraryDatabase2(self.library_path)
        old_categories = old.get_categories()
+       old.conn.close()
        cache = self.init_cache(self.library_path)
        new_categories = cache.get_categories()
        self.assertEqual(set(old_categories), set(new_categories),
@@ -305,6 +311,7 @@ class ReadingTest(BaseTest):
            i, index_is_id=True) else set() for i in ids}
        formats = {i:{f:old.format(i, f, index_is_id=True) for f in fmts} for
            i, fmts in lf.iteritems()}
        old.conn.close()
+       old = None
        cache = self.init_cache(self.library_path)
        for book_id, fmts in lf.iteritems():
@@ -328,12 +335,3 @@ class ReadingTest(BaseTest):

    # }}}

-def tests():
-    return unittest.TestLoader().loadTestsFromTestCase(ReadingTest)
-
-def run():
-    unittest.TextTestRunner(verbosity=2).run(tests())
-
-if __name__ == '__main__':
-    run()
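The comparison rule added in this file treats tuple-valued fields as unordered, except authors and languages, where order is meaningful. Sketched as a helper:

```python
def normalized(field, val):
    # Tuple-valued fields compare as sets (order-insensitive), except
    # authors and languages, where the stored order matters.
    if isinstance(val, tuple) and 'authors' not in field and 'languages' not in field:
        return set(val)
    return val

# Tags may come back in any order; languages must not be reordered.
assert normalized('tags', ('a', 'b')) == normalized('tags', ('b', 'a'))
assert normalized('languages', ('deu', 'eng')) != normalized('languages', ('eng', 'deu'))
```

This matches the `ORDER BY id` change in the tables layer: link order is only guaranteed (and therefore only asserted) for the fields where it carries meaning.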
@@ -7,19 +7,15 @@ __license__ = 'GPL v3'
__copyright__ = '2013, Kovid Goyal <kovid at kovidgoyal.net>'
__docformat__ = 'restructuredtext en'

-import unittest
from collections import namedtuple
from functools import partial

+from calibre.ebooks.metadata import author_to_author_sort
from calibre.utils.date import UNDEFINED_DATE
from calibre.db.tests.base import BaseTest

class WritingTest(BaseTest):

-   @property
-   def cloned_library(self):
-       return self.clone_library(self.library_path)
-
    def create_getter(self, name, getter=None):
        if getter is None:
            if name.endswith('_index'):
@@ -214,7 +210,7 @@ class WritingTest(BaseTest):
            {1, 2})
        for name in ('tags', '#tags'):
            f = cache.fields[name]
-           af(sf(name, {1:('tag one', 'News')}, allow_case_change=False))
+           af(sf(name, {1:('News', 'tag one')}, allow_case_change=False))
            ae(sf(name, {1:'tag one, News'}), {1, 2})
            ae(sf(name, {3:('tag two', 'sep,sep2')}), {2, 3})
            ae(len(f.table.id_map), 4)
@@ -225,7 +221,7 @@ class WritingTest(BaseTest):
            ae(len(c.fields[name].table.id_map), 3)
            ae(len(c.fields[name].table.id_map), 3)
            ae(c.field_for(name, 1), ())
-           ae(c.field_for(name, 2), ('tag one', 'tag two'))
+           ae(c.field_for(name, 2), ('tag two', 'tag one'))
        del cache2

        # Authors
@@ -244,9 +240,10 @@ class WritingTest(BaseTest):
            ae(c.field_for(name, 3), ('Kovid Goyal', 'Divok Layog'))
            ae(c.field_for(name, 2), ('An, Author',))
            ae(c.field_for(name, 1), ('Unknown',) if name=='authors' else ())
-           ae(c.field_for('author_sort', 1), 'Unknown')
-           ae(c.field_for('author_sort', 2), 'An, Author')
-           ae(c.field_for('author_sort', 3), 'Goyal, Kovid & Layog, Divok')
+           if name == 'authors':
+               ae(c.field_for('author_sort', 1), author_to_author_sort('Unknown'))
+               ae(c.field_for('author_sort', 2), author_to_author_sort('An, Author'))
+               ae(c.field_for('author_sort', 3), author_to_author_sort('Kovid Goyal') + ' & ' + author_to_author_sort('Divok Layog'))
        del cache2
        ae(cache.set_field('authors', {1:'KoviD GoyaL'}), {1, 3})
        ae(cache.field_for('author_sort', 1), 'GoyaL, KoviD')
@@ -265,20 +262,33 @@ class WritingTest(BaseTest):
        ae(cache.field_for('languages', 3), ('eng',))
        ae(sf('languages', {3:None}), set([3]))
        ae(cache.field_for('languages', 3), ())
        ae(sf('languages', {1:'deu,fra,eng'}), set([1]), 'Changing order failed')
        ae(sf('languages', {2:'deu,eng,eng'}), set([2]))
        cache2 = self.init_cache(cl)
        for c in (cache, cache2):
            ae(cache.field_for('languages', 1), ('deu', 'fra', 'eng'))
            ae(cache.field_for('languages', 2), ('deu', 'eng'))
        del cache2

+       # Identifiers
+       f = cache.fields['identifiers']
+       ae(sf('identifiers', {3: 'one:1,two:2'}), set([3]))
+       ae(sf('identifiers', {2:None}), set([2]))
+       ae(sf('identifiers', {1: {'test':'1', 'two':'2'}}), set([1]))
+       cache2 = self.init_cache(cl)
+       for c in (cache, cache2):
+           ae(c.field_for('identifiers', 3), {'one':'1', 'two':'2'})
+           ae(c.field_for('identifiers', 2), {})
+           ae(c.field_for('identifiers', 1), {'test':'1', 'two':'2'})
+       del cache2
+
+       # Test setting of title sort
+       ae(sf('title', {1:'The Moose', 2:'Cat'}), {1, 2})
+       cache2 = self.init_cache(cl)
+       for c in (cache, cache2):
+           ae(c.field_for('sort', 1), 'Moose, The')
+           ae(c.field_for('sort', 2), 'Cat')

-       # TODO: identifiers

    # }}}

-def tests():
-    tl = unittest.TestLoader()
-    # return tl.loadTestsFromName('writing.WritingTest.test_many_many_basic')
-    return tl.loadTestsFromTestCase(WritingTest)
-
-def run():
-    unittest.TextTestRunner(verbosity=2).run(tests())
-
-if __name__ == '__main__':
-    run()
@@ -7,7 +7,9 @@ __license__ = 'GPL v3'
__copyright__ = '2011, Kovid Goyal <kovid@kovidgoyal.net>'
__docformat__ = 'restructuredtext en'

+import weakref
from functools import partial
+from itertools import izip, imap

def sanitize_sort_field_name(field_metadata, field):
    field = field_metadata.search_term_to_field_key(field.lower().strip())
@@ -15,11 +17,39 @@ def sanitize_sort_field_name(field_metadata, field):
    field = {'title': 'sort', 'authors':'author_sort'}.get(field, field)
    return field

+class MarkedVirtualField(object):
+
+    def __init__(self, marked_ids):
+        self.marked_ids = marked_ids
+
+    def iter_searchable_values(self, get_metadata, candidates, default_value=None):
+        for book_id in candidates:
+            yield self.marked_ids.get(book_id, default_value), {book_id}
+
+class TableRow(list):
+
+    def __init__(self, book_id, view):
+        self.book_id = book_id
+        self.view = weakref.ref(view)
+
+    def __getitem__(self, obj):
+        view = self.view()
+        if isinstance(obj, slice):
+            return [view._field_getters[c](self.book_id)
+                for c in xrange(*obj.indices(len(view._field_getters)))]
+        else:
+            return view._field_getters[obj](self.book_id)
+
class View(object):

+   ''' A table view of the database, with rows and columns. Also supports
+   filtering and sorting. '''

    def __init__(self, cache):
        self.cache = cache
+       self.marked_ids = {}
        self.search_restriction_book_count = 0
        self.search_restriction = ''
        self._field_getters = {}
        for col, idx in cache.backend.FIELD_MAP.iteritems():
            if isinstance(col, int):
@@ -38,16 +68,33 @@ class View(object):
            except KeyError:
                self._field_getters[idx] = partial(self.get, col)

-       self._map = list(self.cache.all_book_ids())
-       self._map_filtered = list(self._map)
+       self._map = tuple(self.cache.all_book_ids())
+       self._map_filtered = tuple(self._map)

    @property
    def field_metadata(self):
        return self.cache.field_metadata

    def _get_id(self, idx, index_is_id=True):
-       ans = idx if index_is_id else self.index_to_id(idx)
-       return ans
+       return idx if index_is_id else self.index_to_id(idx)

+   def __getitem__(self, row):
+       return TableRow(self._map_filtered[row], self.cache)
+
+   def __len__(self):
+       return len(self._map_filtered)
+
+   def __iter__(self):
+       for book_id in self._map_filtered:
+           yield self._data[book_id]
+
+   def iterall(self):
+       for book_id in self._map:
+           yield self[book_id]
+
+   def iterallids(self):
+       for book_id in self._map:
+           yield book_id
+
    def get_field_map_field(self, row, col, index_is_id=True):
        '''
@@ -66,7 +113,7 @@ class View(object):

    def get_ondevice(self, idx, index_is_id=True, default_value=''):
        id_ = idx if index_is_id else self.index_to_id(idx)
-       self.cache.field_for('ondevice', id_, default_value=default_value)
+       return self.cache.field_for('ondevice', id_, default_value=default_value)

    def get_marked(self, idx, index_is_id=True, default_value=None):
        id_ = idx if index_is_id else self.index_to_id(idx)
@@ -93,7 +140,7 @@ class View(object):
        ans.append(self.cache._author_data(id_))
        return tuple(ans)

-   def multisort(self, fields=[], subsort=False):
+   def multisort(self, fields=[], subsort=False, only_ids=None):
        fields = [(sanitize_sort_field_name(self.field_metadata, x), bool(y)) for x, y in fields]
        keys = self.field_metadata.sortable_field_keys()
        fields = [x for x in fields if x[0] in keys]
@@ -102,8 +149,70 @@ class View(object):
        if not fields:
            fields = [('timestamp', False)]

-       sorted_book_ids = self.cache.multisort(fields)
-       sorted_book_ids
-       # TODO: change maps
+       sorted_book_ids = self.cache.multisort(fields, ids_to_sort=only_ids)
+       if only_ids is None:
+           self._map = tuple(sorted_book_ids)
+           if len(self._map_filtered) == len(self._map):
+               self._map_filtered = tuple(self._map)
+           else:
+               fids = frozenset(self._map_filtered)
+               self._map_filtered = tuple(i for i in self._map if i in fids)
+       else:
+           smap = {book_id:i for i, book_id in enumerate(sorted_book_ids)}
+           only_ids.sort(key=smap.get)
|
||||
|
||||
def search(self, query, return_matches=False):
|
||||
ans = self.search_getting_ids(query, self.search_restriction,
|
||||
set_restriction_count=True)
|
||||
if return_matches:
|
||||
return ans
|
||||
self._map_filtered = tuple(ans)
|
||||
|
||||
def search_getting_ids(self, query, search_restriction,
|
||||
set_restriction_count=False):
|
||||
q = ''
|
||||
if not query or not query.strip():
|
||||
q = search_restriction
|
||||
else:
|
||||
q = query
|
||||
if search_restriction:
|
||||
q = u'(%s) and (%s)' % (search_restriction, query)
|
||||
if not q:
|
||||
if set_restriction_count:
|
||||
self.search_restriction_book_count = len(self._map)
|
||||
return list(self._map)
|
||||
matches = self.cache.search(
|
||||
query, search_restriction, virtual_fields={'marked':MarkedVirtualField(self.marked_ids)})
|
||||
rv = [x for x in self._map if x in matches]
|
||||
if set_restriction_count and q == search_restriction:
|
||||
self.search_restriction_book_count = len(rv)
|
||||
return rv
|
||||
|
||||
def set_search_restriction(self, s):
|
||||
self.search_restriction = s
|
||||
|
||||
def search_restriction_applied(self):
|
||||
return bool(self.search_restriction)
|
||||
|
||||
def get_search_restriction_book_count(self):
|
||||
return self.search_restriction_book_count
|
||||
|
||||
def set_marked_ids(self, id_dict):
|
||||
'''
|
||||
ids in id_dict are "marked". They can be searched for by
|
||||
using the search term ``marked:true``. Pass in an empty dictionary or
|
||||
set to clear marked ids.
|
||||
|
||||
:param id_dict: Either a dictionary mapping ids to values or a set
|
||||
of ids. In the latter case, the value is set to 'true' for all ids. If
|
||||
a mapping is provided, then the search can be used to search for
|
||||
particular values: ``marked:value``
|
||||
'''
|
||||
if not hasattr(id_dict, 'items'):
|
||||
# Simple list. Make it a dict of string 'true'
|
||||
self.marked_ids = dict.fromkeys(id_dict, u'true')
|
||||
else:
|
||||
# Ensure that all the items in the dict are text
|
||||
self.marked_ids = dict(izip(id_dict.iterkeys(), imap(unicode,
|
||||
id_dict.itervalues())))
|
||||
|
||||
|
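The marked-ids search in this hunk works by exposing a plain dict of book ids as a searchable pseudo-field. A minimal, self-contained sketch of the idea (Python 3 here, while the diff targets calibre's Python 2 tree; `search_marked` is an illustrative helper, not a calibre API):

```python
class MarkedVirtualField:
    """Exposes a dict of marked book ids as a searchable pseudo-field."""

    def __init__(self, marked_ids):
        self.marked_ids = marked_ids

    def iter_searchable_values(self, candidates, default_value=None):
        # Yield (value, {book_id}) pairs, mirroring the diff's iterator
        for book_id in candidates:
            yield self.marked_ids.get(book_id, default_value), {book_id}

def search_marked(field, candidates, value):
    # Collect the ids whose marked value matches, e.g. a marked:true query
    matches = set()
    for val, ids in field.iter_searchable_values(candidates):
        if val == value:
            matches |= ids
    return matches
```

For example, `search_marked(MarkedVirtualField({1: 'true'}), [1, 2], 'true')` yields `{1}`, which is how `marked:value` queries resolve without touching the metadata backend.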
@@ -12,7 +12,7 @@ from functools import partial
from datetime import datetime

from calibre.constants import preferred_encoding, ispy3
from calibre.ebooks.metadata import author_to_author_sort
from calibre.ebooks.metadata import author_to_author_sort, title_sort
from calibre.utils.date import (parse_only_date, parse_date, UNDEFINED_DATE,
                                isoformat)
from calibre.utils.localization import canonicalize_lang

@@ -106,6 +106,21 @@ def adapt_languages(to_tuple, x):
        ans.append(lc)
    return tuple(ans)

def clean_identifier(typ, val):
    typ = icu_lower(typ).strip().replace(':', '').replace(',', '')
    val = val.strip().replace(',', '|').replace(':', '|')
    return typ, val

def adapt_identifiers(to_tuple, x):
    if not isinstance(x, dict):
        x = {k:v for k, v in (y.partition(':')[0::2] for y in to_tuple(x))}
    ans = {}
    for k, v in x.iteritems():
        k, v = clean_identifier(k, v)
        if k and v:
            ans[k] = v
    return ans

def get_adapter(name, metadata):
    dt = metadata['datatype']
    if dt == 'text':

@@ -145,6 +160,8 @@ def get_adapter(name, metadata):
        return lambda x: 1.0 if ans(x) is None else ans(x)
    if name == 'languages':
        return partial(adapt_languages, ans)
    if name == 'identifiers':
        return partial(adapt_identifiers, ans)

    return ans
# }}}

@@ -157,6 +174,10 @@ def one_one_in_books(book_id_val_map, db, field, *args):
        db.conn.executemany(
            'UPDATE books SET %s=? WHERE id=?'%field.metadata['column'], sequence)
        field.table.book_col_map.update(book_id_val_map)
    if field.name == 'title':
        # Set the title sort field
        field.title_sort_field.writer.set_books(
            {k:title_sort(v) for k, v in book_id_val_map.iteritems()}, db)
    return set(book_id_val_map)

def one_one_in_other(book_id_val_map, db, field, *args):

@@ -396,6 +417,31 @@ def many_many(book_id_val_map, db, field, allow_case_change, *args):

# }}}

def identifiers(book_id_val_map, db, field, *args):  # {{{
    table = field.table
    updates = set()
    for book_id, identifiers in book_id_val_map.iteritems():
        if book_id not in table.book_col_map:
            table.book_col_map[book_id] = {}
        current_ids = table.book_col_map[book_id]
        remove_keys = set(current_ids) - set(identifiers)
        for key in remove_keys:
            table.col_book_map.get(key, set()).discard(book_id)
            current_ids.pop(key, None)
        current_ids.update(identifiers)
        for key, val in identifiers.iteritems():
            if key not in table.col_book_map:
                table.col_book_map[key] = set()
            table.col_book_map[key].add(book_id)
            updates.add((book_id, key, val))
    db.conn.executemany('DELETE FROM identifiers WHERE book=?',
                        ((x,) for x in book_id_val_map))
    if updates:
        db.conn.executemany('INSERT OR REPLACE INTO identifiers (book, type, val) VALUES (?, ?, ?)',
                            tuple(updates))
    return set(book_id_val_map)
# }}}

def dummy(book_id_val_map, *args):
    return set()

@@ -412,6 +458,8 @@ class Writer(object):
            self.set_books_func = dummy
        elif self.name[0] == '#' and self.name.endswith('_index'):
            self.set_books_func = custom_series_index
        elif self.name == 'identifiers':
            self.set_books_func = identifiers
        elif field.is_many_many:
            self.set_books_func = many_many
        elif field.is_many:
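The identifier adapter added above normalizes both dict input and `type:value` strings before anything reaches the database. An approximate, standalone sketch of that behaviour (Python 3, with plain `str.lower` standing in for calibre's `icu_lower`, and without the `to_tuple` coercion step):

```python
def clean_identifier(typ, val):
    # Lowercase the type and strip characters that would collide with the
    # 'type:value' syntax used when identifiers are entered as text
    typ = typ.lower().strip().replace(':', '').replace(',', '')
    val = val.strip().replace(',', '|').replace(':', '|')
    return typ, val

def adapt_identifiers(x):
    # Accept either a dict or an iterable of 'type:value' strings
    if not isinstance(x, dict):
        x = {k: v for k, _, v in (y.partition(':') for y in x)}
    ans = {}
    for k, v in x.items():
        k, v = clean_identifier(k, v)
        if k and v:  # Drop entries that cleaned down to nothing
            ans[k] = v
    return ans
```

Entries without a value (e.g. a bare `'isbn'` string) are silently dropped, which matches the `if k and v` guard in the hunk.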
@@ -239,7 +239,7 @@ class ANDROID(USBMS):
            'ADVANCED', 'SGH-I727', 'USB_FLASH_DRIVER', 'ANDROID',
            'S5830I_CARD', 'MID7042', 'LINK-CREATE', '7035', 'VIEWPAD_7E',
            'NOVO7', 'MB526', '_USB#WYK7MSF8KE', 'TABLET_PC', 'F', 'MT65XX_MS',
            'ICS', 'E400', '__FILE-STOR_GADG', 'ST80208-1']
            'ICS', 'E400', '__FILE-STOR_GADG', 'ST80208-1', 'GT-S5660M_CARD']

    WINDOWS_CARD_A_MEM = ['ANDROID_PHONE', 'GT-I9000_CARD', 'SGH-I897',
            'FILE-STOR_GADGET', 'SGH-T959_CARD', 'SGH-T959', 'SAMSUNG_ANDROID', 'GT-P1000_CARD',
            'A70S', 'A101IT', '7', 'INCREDIBLE', 'A7EB', 'SGH-T849_CARD',
@@ -24,11 +24,11 @@ class PALMPRE(USBMS):
    FORMATS = ['epub', 'mobi', 'prc', 'pdb', 'txt']

    VENDOR_ID = [0x0830]
    PRODUCT_ID = [0x8004, 0x8002, 0x0101]
    PRODUCT_ID = [0x8004, 0x8002, 0x0101, 0x8042]
    BCD = [0x0316]

    VENDOR_NAME = 'PALM'
    WINDOWS_MAIN_MEM = 'PRE'
    WINDOWS_MAIN_MEM = ['PRE', 'PALM_DEVICE']

    EBOOK_DIR_MAIN = 'E-books'
@@ -82,6 +82,7 @@ class NOOK(USBMS):
        return [x.replace('#', '_') for x in components]

class NOOK_COLOR(NOOK):
    name = 'Nook Color Device Interface'
    description = _('Communicate with the Nook Color, TSR and Tablet eBook readers.')

    PRODUCT_ID = [0x002, 0x003, 0x004]
@@ -104,13 +104,11 @@ class PDFOutput(OutputFormatPlugin):
                'specify a footer template, it will take precedence '
                'over this option.')),
        OptionRecommendation(name='pdf_footer_template', recommended_value=None,
            help=_('An HTML template used to generate footers on every page.'
                   ' The string _PAGENUM_ will be replaced by the current page'
                   ' number.')),
            help=_('An HTML template used to generate %s on every page.'
                   ' The strings _PAGENUM_, _TITLE_, _AUTHOR_ and _SECTION_ will be replaced by their current values.')%_('footers')),
        OptionRecommendation(name='pdf_header_template', recommended_value=None,
            help=_('An HTML template used to generate headers on every page.'
                   ' The string _PAGENUM_ will be replaced by the current page'
                   ' number.')),
            help=_('An HTML template used to generate %s on every page.'
                   ' The strings _PAGENUM_, _TITLE_, _AUTHOR_ and _SECTION_ will be replaced by their current values.')%_('headers')),
        ])

    def convert(self, oeb_book, output_path, input_plugin, opts, log):
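The new `_TITLE_`, `_AUTHOR_` and `_SECTION_` placeholders are filled in by plain global string substitution on the template, exactly like the existing `_PAGENUM_`. A rough sketch of that replacement step (illustrative only; the real substitution happens in the CoffeeScript paged-display code):

```python
def render_template(template, pagenum, title, author, section):
    # Mirror the CoffeeScript side: replace every occurrence of each
    # placeholder with its current value
    for key, value in (('_PAGENUM_', str(pagenum)), ('_TITLE_', title),
                       ('_AUTHOR_', author), ('_SECTION_', section)):
        template = template.replace(key, value)
    return template
```

A footer template such as `'<p>_TITLE_ - _PAGENUM_</p>'` therefore renders differently on each page as the page number and current section change.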
@@ -858,7 +858,7 @@ class Amazon(Source):
    # }}}

    def download_cover(self, log, result_queue, abort,  # {{{
            title=None, authors=None, identifiers={}, timeout=30):
            title=None, authors=None, identifiers={}, timeout=30, get_best_cover=False):
        cached_url = self.get_cached_cover_url(identifiers)
        if cached_url is None:
            log.info('No cached cover found, running identify')
@@ -31,7 +31,7 @@ msprefs.defaults['find_first_edition_date'] = False
# Google covers are often poor quality (scans/errors) but they have high
# resolution, so they trump covers from better sources. So make sure they
# are only used if no other covers are found.
msprefs.defaults['cover_priorities'] = {'Google':2}
msprefs.defaults['cover_priorities'] = {'Google':2, 'Google Images':2}

def create_log(ostream=None):
    from calibre.utils.logging import ThreadSafeLog, FileStream

@@ -222,6 +222,9 @@ class Source(Plugin):
    #: plugin
    config_help_message = None

    #: If True this source can return multiple covers for a given query
    can_get_multiple_covers = False

    def __init__(self, *args, **kwargs):
        Plugin.__init__(self, *args, **kwargs)

@@ -522,7 +525,7 @@ class Source(Plugin):
        return None

    def download_cover(self, log, result_queue, abort,
            title=None, authors=None, identifiers={}, timeout=30):
            title=None, authors=None, identifiers={}, timeout=30, get_best_cover=False):
        '''
        Download a cover and put it into result_queue. The parameters all have
        the same meaning as for :meth:`identify`. Put (self, cover_data) into

@@ -531,6 +534,9 @@ class Source(Plugin):
        This method should use cached cover URLs for efficiency whenever
        possible. When cached data is not present, most plugins simply call
        identify and use its results.

        If the parameter get_best_cover is True and this plugin can get
        multiple covers, it should only get the "best" one.
        '''
        pass
@@ -35,9 +35,14 @@ class Worker(Thread):
        start_time = time.time()
        if not self.abort.is_set():
            try:
                self.plugin.download_cover(self.log, self.rq, self.abort,
                        title=self.title, authors=self.authors,
                        identifiers=self.identifiers, timeout=self.timeout)
                if self.plugin.can_get_multiple_covers:
                    self.plugin.download_cover(self.log, self.rq, self.abort,
                            title=self.title, authors=self.authors, get_best_cover=True,
                            identifiers=self.identifiers, timeout=self.timeout)
                else:
                    self.plugin.download_cover(self.log, self.rq, self.abort,
                            title=self.title, authors=self.authors,
                            identifiers=self.identifiers, timeout=self.timeout)
            except:
                self.log.exception('Failed to download cover from',
                        self.plugin.name)
@@ -221,7 +221,7 @@ class Douban(Source):
    # }}}

    def download_cover(self, log, result_queue, abort,  # {{{
            title=None, authors=None, identifiers={}, timeout=30):
            title=None, authors=None, identifiers={}, timeout=30, get_best_cover=False):
        cached_url = self.get_cached_cover_url(identifiers)
        if cached_url is None:
            log.info('No cached cover found, running identify')
@@ -320,7 +320,7 @@ class Edelweiss(Source):
    # }}}

    def download_cover(self, log, result_queue, abort,  # {{{
            title=None, authors=None, identifiers={}, timeout=30):
            title=None, authors=None, identifiers={}, timeout=30, get_best_cover=False):
        cached_url = self.get_cached_cover_url(identifiers)
        if cached_url is None:
            log.info('No cached cover found, running identify')
@@ -209,7 +209,7 @@ class GoogleBooks(Source):
    # }}}

    def download_cover(self, log, result_queue, abort,  # {{{
            title=None, authors=None, identifiers={}, timeout=30):
            title=None, authors=None, identifiers={}, timeout=30, get_best_cover=False):
        cached_url = self.get_cached_cover_url(identifiers)
        if cached_url is None:
            log.info('No cached cover found, running identify')
148  src/calibre/ebooks/metadata/sources/google_images.py  Normal file
@@ -0,0 +1,148 @@
#!/usr/bin/env python
# vim:fileencoding=UTF-8
from __future__ import (unicode_literals, division, absolute_import,
                        print_function)

__license__ = 'GPL v3'
__copyright__ = '2013, Kovid Goyal <kovid@kovidgoyal.net>'
__docformat__ = 'restructuredtext en'

from collections import OrderedDict

from calibre import as_unicode
from calibre.ebooks.metadata.sources.base import Source, Option

class GoogleImages(Source):

    name = 'Google Images'
    description = _('Downloads covers from a Google Image search. Useful to find larger/alternate covers.')
    capabilities = frozenset(['cover'])
    config_help_message = _('Configure the Google Image Search plugin')
    can_get_multiple_covers = True
    options = (Option('max_covers', 'number', 5, _('Maximum number of covers to get'),
                      _('The maximum number of covers to process from the google search result')),
               Option('size', 'choices', 'svga', _('Cover size'),
                      _('Search for covers larger than the specified size'),
                      choices=OrderedDict((
                          ('any', _('Any size'),),
                          ('l', _('Large'),),
                          ('qsvga', _('Larger than %s')%'400x300',),
                          ('vga', _('Larger than %s')%'640x480',),
                          ('svga', _('Larger than %s')%'600x800',),
                          ('xga', _('Larger than %s')%'1024x768',),
                          ('2mp', _('Larger than %s')%'2 MP',),
                          ('4mp', _('Larger than %s')%'4 MP',),
                      ))),
               )

    def download_cover(self, log, result_queue, abort,
            title=None, authors=None, identifiers={}, timeout=30, get_best_cover=False):
        if not title:
            return
        from threading import Thread
        import time
        timeout = max(60, timeout)  # Needs at least a minute
        title = ' '.join(self.get_title_tokens(title))
        author = ' '.join(self.get_author_tokens(authors))
        urls = self.get_image_urls(title, author, log, abort, timeout)
        if not urls:
            log('No images found in Google for title: %r and authors: %r'%(title, author))
            return
        urls = urls[:self.prefs['max_covers']]
        if get_best_cover:
            urls = urls[:1]
        workers = [Thread(target=self.download_image, args=(url, timeout, log, result_queue)) for url in urls]
        for w in workers:
            w.daemon = True
            w.start()
        alive = True
        start_time = time.time()
        while alive and not abort.is_set() and time.time() - start_time < timeout:
            alive = False
            for w in workers:
                if w.is_alive():
                    alive = True
                    break
            abort.wait(0.1)

    def download_image(self, url, timeout, log, result_queue):
        try:
            ans = self.browser.open_novisit(url, timeout=timeout).read()
            result_queue.put((self, ans))
            log('Downloaded cover from: %s'%url)
        except Exception:
            log.exception('Failed to download cover from: %r'%url)

    def get_image_urls(self, title, author, log, abort, timeout):
        from calibre.utils.ipc.simple_worker import fork_job, WorkerError
        try:
            return fork_job('calibre.ebooks.metadata.sources.google_images',
                    'search', args=(title, author, self.prefs['size'], timeout), no_output=True, abort=abort, timeout=timeout)['result']
        except WorkerError as e:
            if e.orig_tb:
                log.error(e.orig_tb)
            log.exception('Searching google failed:' + as_unicode(e))
        except Exception as e:
            log.exception('Searching google failed:' + as_unicode(e))

        return []

USER_AGENT = 'Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.13) Gecko/20101210 Firefox/3.6.13'

def find_image_urls(br, ans):
    import urlparse
    for w in br.page.mainFrame().documentElement().findAll('.images_table a[href]'):
        try:
            imgurl = urlparse.parse_qs(urlparse.urlparse(unicode(w.attribute('href'))).query)['imgurl'][0]
        except:
            continue
        if imgurl not in ans:
            ans.append(imgurl)

def search(title, author, size, timeout, debug=False):
    import time
    from calibre.web.jsbrowser.browser import Browser, LoadWatcher, Timeout
    ans = []
    start_time = time.time()
    br = Browser(user_agent=USER_AGENT, enable_developer_tools=debug)
    br.visit('https://www.google.com/advanced_image_search')
    f = br.select_form('form[action="/search"]')
    f['as_q'] = '%s %s'%(title, author)
    if size != 'any':
        f['imgsz'] = size
    f['imgar'] = 't|xt'
    f['as_filetype'] = 'jpg'
    br.submit(wait_for_load=False)

    # Loop until the page finishes loading or at least five image urls are
    # found
    lw = LoadWatcher(br.page, br)
    while lw.is_loading and len(ans) < 5:
        br.run_for_a_time(0.2)
        find_image_urls(br, ans)
        if time.time() - start_time > timeout:
            raise Timeout('Timed out trying to load google image search page')
    find_image_urls(br, ans)
    if debug:
        br.show_browser()
    br.close()
    del br  # Needed to prevent PyQt from segfaulting
    return ans

def test_google():
    import pprint
    pprint.pprint(search('heroes', 'abercrombie', 'svga', 60, debug=True))

def test():
    from Queue import Queue
    from threading import Event
    from calibre.utils.logging import default_log
    p = GoogleImages(None)
    rq = Queue()
    p.download_cover(default_log, rq, Event(), title='The Heroes',
            authors=('Joe Abercrombie',))
    print('Downloaded', rq.qsize(), 'covers')

if __name__ == '__main__':
    test()
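The `find_image_urls` helper in the new file pulls the real image address out of the `imgurl` query parameter of each Google result link. That parsing step can be sketched with just the standard library (Python 3 `urllib.parse` here, where the new file uses Python 2's `urlparse` module; the sample href below is illustrative):

```python
from urllib.parse import urlparse, parse_qs

def extract_imgurl(href):
    # Google image results link to a redirect URL that carries the original
    # image address, percent-encoded, in the 'imgurl' query parameter
    qs = parse_qs(urlparse(href).query)
    vals = qs.get('imgurl')
    return vals[0] if vals else None
```

`parse_qs` also decodes the percent-encoding, so the returned string is a directly fetchable URL; links without an `imgurl` parameter yield `None`, matching the `except: continue` in the original loop.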
@@ -19,7 +19,7 @@ class OpenLibrary(Source):
    OPENLIBRARY = 'http://covers.openlibrary.org/b/isbn/%s-L.jpg?default=false'

    def download_cover(self, log, result_queue, abort,
            title=None, authors=None, identifiers={}, timeout=30):
            title=None, authors=None, identifiers={}, timeout=30, get_best_cover=False):
        if 'isbn' not in identifiers:
            return
        isbn = identifiers['isbn']
@@ -75,7 +75,7 @@ class OverDrive(Source):
    # }}}

    def download_cover(self, log, result_queue, abort,  # {{{
            title=None, authors=None, identifiers={}, timeout=30):
            title=None, authors=None, identifiers={}, timeout=30, get_best_cover=False):
        import mechanize
        cached_url = self.get_cached_cover_url(identifiers)
        if cached_url is None:
@@ -55,7 +55,7 @@ class Ozon(Source):
        # for ozon.ru search we have to format ISBN with '-'
        isbn = _format_isbn(log, identifiers.get('isbn', None))
        ozonid = identifiers.get('ozon', None)

        unk = unicode(_('Unknown')).upper()
        if (title and title != unk) or (authors and authors != [unk]) or isbn or not ozonid:
            qItems = set([isbn, title])

@@ -64,19 +64,19 @@ class Ozon(Source):
            qItems.discard(None)
            qItems.discard('')
            qItems = map(_quoteString, qItems)

            q = u' '.join(qItems).strip()
            log.info(u'search string: ' + q)

            if isinstance(q, unicode):
                q = q.encode('utf-8')
            if not q:
                return None

            search_url += quote_plus(q)
        else:
            search_url = self.ozon_url + '/webservices/OzonWebSvc.asmx/ItemDetail?ID=%s' % ozonid

        log.debug(u'search url: %r'%search_url)
        return search_url
    # }}}

@@ -250,7 +250,7 @@ class Ozon(Source):
        return url
    # }}}

    def download_cover(self, log, result_queue, abort, title=None, authors=None, identifiers={}, timeout=30):  # {{{
    def download_cover(self, log, result_queue, abort, title=None, authors=None, identifiers={}, timeout=30, get_best_cover=False):  # {{{
        cached_url = self.get_cached_cover_url(identifiers)
        if cached_url is None:
            log.debug('No cached cover found, running identify')
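The Ozon hunk builds its search URL by joining the non-empty query items and percent-encoding the result. A simplified sketch of that query-assembly step (Python 3; the base URL and parameter order here are illustrative, not Ozon's actual endpoint, and the real code also quotes individual items):

```python
from urllib.parse import quote_plus

def build_search_url(base, title=None, isbn=None, authors=()):
    # Join whatever query items are present, as the hunk does with qItems,
    # then percent-encode the whole query for the search URL
    items = [x for x in [isbn, title, *authors] if x]
    q = ' '.join(items).strip()
    return base + quote_plus(q) if q else None
```

Returning `None` for an empty query mirrors the `if not q: return None` guard in the original, which prevents issuing a search with nothing to search for.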
@@ -11,6 +11,7 @@ import os
from threading import Event, Thread
from Queue import Queue, Empty
from io import BytesIO
from collections import Counter

from calibre.utils.date import as_utc
from calibre.ebooks.metadata.sources.identify import identify, msprefs

@@ -113,13 +114,18 @@ def single_covers(title, authors, identifiers, caches, tdir):
            kwargs=dict(title=title, authors=authors, identifiers=identifiers))
    worker.daemon = True
    worker.start()
    c = Counter()
    while worker.is_alive():
        try:
            plugin, width, height, fmt, data = results.get(True, 1)
        except Empty:
            continue
        else:
            name = '%s,,%s,,%s,,%s.cover'%(plugin.name, width, height, fmt)
            name = plugin.name
            if plugin.can_get_multiple_covers:
                name += '{%d}'%c[plugin.name]
                c[plugin.name] += 1
            name = '%s,,%s,,%s,,%s.cover'%(name, width, height, fmt)
            with open(name, 'wb') as f:
                f.write(data)
            os.mkdir(name+'.done')
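The renaming logic in the last hunk disambiguates several covers from one plugin by appending a per-plugin `{n}` counter before the `,,width,,height,,fmt.cover` suffix. A standalone sketch of just the naming step (file writing omitted; the function name is illustrative):

```python
from collections import Counter

def cover_name(plugin_name, multiple, counts, width, height, fmt):
    # Plugins that can return several covers get a per-plugin counter
    # suffix so successive results do not overwrite each other
    name = plugin_name
    if multiple:
        name += '{%d}' % counts[plugin_name]
        counts[plugin_name] += 1
    return '%s,,%s,,%s,,%s.cover' % (name, width, height, fmt)
```

A shared `Counter` keeps the numbering independent per plugin, so `Google Images{0}` and `Google Images{1}` can coexist with a single `Amazon` result.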
@@ -33,6 +33,7 @@ class PagedDisplay
        this.header_template = null
        this.header = null
        this.footer = null
        this.hf_style = null

    read_document_margins: () ->
        # Read page margins from the document. First checks for an @page rule.

@@ -184,15 +185,22 @@ class PagedDisplay
        # log('Time to layout:', new Date().getTime() - start_time)
        return sm

    create_header_footer: () ->
    create_header_footer: (uuid) ->
        if this.header_template != null
            this.header = document.createElement('div')
            this.header.setAttribute('style', "overflow:hidden; display:block; position:absolute; left:#{ this.side_margin }px; top: 0px; height: #{ this.margin_top }px; width: #{ this.col_width }px; margin: 0; padding: 0")
            this.header.setAttribute('id', 'pdf_page_header_'+uuid)
            document.body.appendChild(this.header)
        if this.footer_template != null
            this.footer = document.createElement('div')
            this.footer.setAttribute('style', "overflow:hidden; display:block; position:absolute; left:#{ this.side_margin }px; top: #{ window.innerHeight - this.margin_bottom }px; height: #{ this.margin_bottom }px; width: #{ this.col_width }px; margin: 0; padding: 0")
            this.footer.setAttribute('id', 'pdf_page_footer_'+uuid)
            document.body.appendChild(this.footer)
        if this.header != null or this.footer != null
            this.hf_uuid = uuid
            this.hf_style = document.createElement('style')
            this.hf_style.setAttribute('type', 'text/css')
            document.head.appendChild(this.hf_style)
            this.update_header_footer(1)

    position_header_footer: () ->

@@ -203,10 +211,16 @@ class PagedDisplay
            this.footer.style.setProperty('left', left+'px')

    update_header_footer: (pagenum) ->
        if this.hf_style != null
            if pagenum%2 == 1 then cls = "even_page" else cls = "odd_page"
            this.hf_style.innerHTML = "#pdf_page_header_#{ this.hf_uuid } .#{ cls }, #pdf_page_footer_#{ this.hf_uuid } .#{ cls } { display: none }"
            title = py_bridge.title()
            author = py_bridge.author()
            section = py_bridge.section()
        if this.header != null
            this.header.innerHTML = this.header_template.replace(/_PAGENUM_/g, pagenum+"")
            this.header.innerHTML = this.header_template.replace(/_PAGENUM_/g, pagenum+"").replace(/_TITLE_/g, title+"").replace(/_AUTHOR_/g, author+"").replace(/_SECTION_/g, section+"")
        if this.footer != null
            this.footer.innerHTML = this.footer_template.replace(/_PAGENUM_/g, pagenum+"")
            this.footer.innerHTML = this.footer_template.replace(/_PAGENUM_/g, pagenum+"").replace(/_TITLE_/g, title+"").replace(/_AUTHOR_/g, author+"").replace(/_SECTION_/g, section+"")

    fit_images: () ->
        # Ensure no images are wider than the available width in a column. Note
@@ -7,7 +7,7 @@ __license__ = 'GPL v3'
__copyright__ = '2012, Kovid Goyal <kovid at kovidgoyal.net>'
__docformat__ = 'restructuredtext en'

import sys, traceback
import sys, traceback, math
from collections import namedtuple
from functools import wraps, partial
from future_builtins import map

@@ -355,11 +355,11 @@ class PdfDevice(QPaintDevice):  # {{{

    @property
    def full_page_rect(self):
        page_width = self.page_width * self.xdpi / 72.0
        lm = self.left_margin * self.xdpi / 72.0
        page_height = self.page_height * self.ydpi / 72.0
        tm = self.top_margin * self.ydpi / 72.0
        return (-lm, -tm, page_width, page_height)
        page_width = int(math.ceil(self.page_width * self.xdpi / 72.0))
        lm = int(math.ceil(self.left_margin * self.xdpi / 72.0))
        page_height = int(math.ceil(self.page_height * self.ydpi / 72.0))
        tm = int(math.ceil(self.top_margin * self.ydpi / 72.0))
        return (-lm, -tm, page_width+1, page_height+1)

    @property
    def current_page_num(self):
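The `full_page_rect` change above switches from fractional point-to-pixel values to ceiled integers with one pixel of slack, so the painted area never falls short of the page edge after unit conversion. A standalone sketch of the arithmetic (plain function form, detached from the Qt device class):

```python
import math

def full_page_rect(page_width_pt, page_height_pt, lm_pt, tm_pt, xdpi, ydpi):
    # Convert points (1/72 inch) to device pixels, rounding up so the
    # rectangle always covers the full page; the +1 adds a pixel of slack
    pw = int(math.ceil(page_width_pt * xdpi / 72.0))
    lm = int(math.ceil(lm_pt * xdpi / 72.0))
    ph = int(math.ceil(page_height_pt * ydpi / 72.0))
    tm = int(math.ceil(tm_pt * ydpi / 72.0))
    return (-lm, -tm, pw + 1, ph + 1)
```

At 72 dpi the conversion is exact, but at any other resolution the old fractional values could truncate; rounding up avoids a hairline of unpainted page.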
@ -130,6 +130,18 @@ class PDFWriter(QObject):
|
||||
_pass_json_value = pyqtProperty(QString, fget=_pass_json_value_getter,
|
||||
fset=_pass_json_value_setter)
|
||||
|
||||
@pyqtSlot(result=unicode)
|
||||
def title(self):
|
||||
return self.doc_title
|
||||
|
||||
@pyqtSlot(result=unicode)
|
||||
def author(self):
|
||||
return self.doc_author
|
||||
|
||||
@pyqtSlot(result=unicode)
|
||||
def section(self):
|
||||
return self.current_section
|
||||
|
||||
def __init__(self, opts, log, cover_data=None, toc=None):
|
||||
from calibre.gui2 import is_ok_to_use_qt
|
||||
if not is_ok_to_use_qt():
|
||||
@ -154,6 +166,7 @@ class PDFWriter(QObject):
|
||||
self.view.page().mainFrame().setScrollBarPolicy(x,
|
||||
Qt.ScrollBarAlwaysOff)
|
||||
self.report_progress = lambda x, y: x
|
||||
self.current_section = ''
|
||||
|
||||
def dump(self, items, out_stream, pdf_metadata):
|
||||
opts = self.opts
|
||||
@ -170,9 +183,13 @@ class PDFWriter(QObject):
|
||||
opts.uncompressed_pdf,
|
||||
mark_links=opts.pdf_mark_links)
|
||||
self.footer = opts.pdf_footer_template
|
||||
if self.footer is None and opts.pdf_page_numbers:
|
||||
if self.footer:
|
||||
self.footer = self.footer.strip()
|
||||
if not self.footer and opts.pdf_page_numbers:
|
||||
self.footer = '<p style="text-align:center; text-indent: 0">_PAGENUM_</p>'
|
||||
self.header = opts.pdf_header_template
|
||||
if self.header:
|
||||
self.header = self.header.strip()
|
||||
min_margin = 36
|
||||
if self.footer and opts.margin_bottom < min_margin:
|
||||
self.log.warn('Bottom margin is too small for footer, increasing it.')
|
||||
@ -192,6 +209,8 @@ class PDFWriter(QObject):
|
||||
self.doc.set_metadata(title=pdf_metadata.title,
|
||||
author=pdf_metadata.author,
|
||||
tags=pdf_metadata.tags)
|
||||
self.doc_title = pdf_metadata.title
|
||||
self.doc_author = pdf_metadata.author
|
||||
self.painter.save()
|
||||
try:
|
||||
if self.cover_data is not None:
|
||||
@ -273,13 +292,34 @@ class PDFWriter(QObject):
|
||||
self.loop.processEvents(self.loop.ExcludeUserInputEvents)
|
||||
evaljs('document.getElementById("MathJax_Message").style.display="none";')
|
||||
|
||||
    def get_sections(self, anchor_map):
        sections = {}
        ci = os.path.abspath(os.path.normcase(self.current_item))
        if self.toc is not None:
            for toc in self.toc.flat():
                path = toc.abspath or None
                frag = toc.fragment or None
                if path is None:
                    continue
                path = os.path.abspath(os.path.normcase(path))
                if path == ci:
                    col = 0
                    if frag and frag in anchor_map:
                        col = anchor_map[frag]['column']
                    if col not in sections:
                        sections[col] = toc.text or _('Untitled')

        return sections

    def do_paged_render(self):
        if self.paged_js is None:
            import uuid
            from calibre.utils.resources import compiled_coffeescript as cc
            self.paged_js = cc('ebooks.oeb.display.utils')
            self.paged_js += cc('ebooks.oeb.display.indexing')
            self.paged_js += cc('ebooks.oeb.display.paged')
            self.paged_js += cc('ebooks.oeb.display.mathjax')
            self.hf_uuid = str(uuid.uuid4()).replace('-', '')

        self.view.page().mainFrame().addToJavaScriptWindowObject("py_bridge", self)
        self.view.page().longjs_counter = 0
@@ -302,6 +342,12 @@ class PDFWriter(QObject):
            py_bridge.value = book_indexing.all_links_and_anchors();
            '''%(self.margin_top, 0, self.margin_bottom))

        amap = self.bridge_value
        if not isinstance(amap, dict):
            amap = {'links':[], 'anchors':{}} # Some javascript error occurred
        sections = self.get_sections(amap['anchors'])
        col = 0

        if self.header:
            self.bridge_value = self.header
            evaljs('paged_display.header_template = py_bridge.value')
@@ -309,15 +355,14 @@ class PDFWriter(QObject):
            self.bridge_value = self.footer
            evaljs('paged_display.footer_template = py_bridge.value')
        if self.header or self.footer:
            evaljs('paged_display.create_header_footer();')
            evaljs('paged_display.create_header_footer("%s");'%self.hf_uuid)

        amap = self.bridge_value
        if not isinstance(amap, dict):
            amap = {'links':[], 'anchors':{}} # Some javascript error occurred
        start_page = self.current_page_num

        mf = self.view.page().mainFrame()
        while True:
            if col in sections:
                self.current_section = sections[col]
            self.doc.init_page()
            if self.header or self.footer:
                evaljs('paged_display.update_header_footer(%d)'%self.current_page_num)
@@ -331,8 +376,10 @@
            evaljs('window.scrollTo(%d, 0); paged_display.position_header_footer();'%nsl[0])
            if self.doc.errors_occurred:
                break
            col += 1

        if not self.doc.errors_occurred:
            self.doc.add_links(self.current_item, start_page, amap['links'],
                    amap['anchors'])
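The column-to-section bookkeeping in `get_sections` can be sketched without Qt or the calibre runtime. The function name, the `toc_entries` tuple shape, and the sample data below are hypothetical, chosen only to mirror the logic of mapping each rendered column to the first ToC title that starts there:

```python
def sections_for_columns(anchor_map, toc_entries, current_path):
    """Map a rendered column number to the ToC title that starts there.

    anchor_map: {fragment: {'column': int}} as produced by the JS indexer.
    toc_entries: [(path, fragment, title)] flattened ToC (hypothetical shape).
    """
    sections = {}
    for path, frag, title in toc_entries:
        if path != current_path:
            continue
        col = anchor_map.get(frag, {}).get('column', 0)
        # First ToC entry wins for a given column, as in get_sections()
        sections.setdefault(col, title or 'Untitled')
    return sections

anchors = {'ch1': {'column': 0}, 'ch2': {'column': 5}}
toc = [('a.html', 'ch1', 'One'), ('a.html', 'ch2', 'Two'), ('b.html', 'x', 'Other')]
print(sections_for_columns(anchors, toc, 'a.html'))  # {0: 'One', 5: 'Two'}
```

The `while True` render loop then only needs a dictionary lookup per column to keep `current_section` up to date for the header/footer templates.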
@@ -347,9 +347,9 @@ class DeleteAction(InterfaceAction):
            self.remove_matching_books_from_device()
        # The following will run if the selected books are not on a connected device.
        # The user has selected to delete from the library or the device and library.
        if not confirm('<p>'+_('The selected books will be '
        if not confirm('<p>'+_('The %d selected book(s) will be '
                            '<b>permanently deleted</b> and the files '
                            'removed from your calibre library. Are you sure?')
                            'removed from your calibre library. Are you sure?')%len(to_delete_ids)
                        +'</p>', 'library_delete_books', self.gui):
            return
        next_id = view.next_id
@@ -382,9 +382,9 @@ class DeleteAction(InterfaceAction):
            view = self.gui.card_b_view
        paths = view.model().paths(rows)
        ids = view.model().indices(rows)
        if not confirm('<p>'+_('The selected books will be '
        if not confirm('<p>'+_('The %d selected book(s) will be '
                            '<b>permanently deleted</b> '
                            'from your device. Are you sure?')
                            'from your device. Are you sure?')%len(paths)
                        +'</p>', 'device_delete_books', self.gui):
            return
        job = self.gui.remove_paths(paths)
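The pattern in both hunks is the same: the selection count is interpolated with `%` *after* the string has been through the `_()` translation call, so translators see the `%d` placeholder. A minimal sketch of that shape (the helper name and the untranslated template are illustrative, not calibre API):

```python
def confirm_msg(n):
    # Stand-in for _('The %d selected book(s) will be ...'): interpolate the
    # count into the (already translated) template, then wrap in the <p> tag.
    template = 'The %d selected book(s) will be permanently deleted. Are you sure?'
    return '<p>' + template % n + '</p>'

print(confirm_msg(3))
# <p>The 3 selected book(s) will be permanently deleted. Are you sure?</p>
```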
@@ -15,7 +15,8 @@ from PyQt4.Qt import (QFileSystemWatcher, QObject, Qt, pyqtSignal, QTimer)
from calibre import prints
from calibre.ptempfile import PersistentTemporaryDirectory
from calibre.ebooks import BOOK_EXTENSIONS
from calibre.gui2 import question_dialog, gprefs
from calibre.gui2 import gprefs
from calibre.gui2.dialogs.duplicates import DuplicatesQuestion

AUTO_ADDED = frozenset(BOOK_EXTENSIONS) - {'pdr', 'mbp', 'tan'}

@@ -218,17 +219,20 @@ class AutoAdder(QObject):
                paths.extend(p)
                formats.extend(f)
                metadata.extend(mis)
            files = [_('%(title)s by %(author)s')%dict(title=mi.title,
                author=mi.format_field('authors')[1]) for mi in metadata]
            if question_dialog(self.parent(), _('Duplicates found!'),
                    _('Books with the same title as the following already '
                        'exist in the database. Add them anyway?'),
                    '\n'.join(files)):
                dups, ids = m.add_books(paths, formats, metadata,
                        add_duplicates=True, return_ids=True)
                added_ids |= set(ids)
                num = len(ids)
                count += num
            dups = [(mi, mi.cover, [p]) for mi, p in zip(metadata, paths)]
            d = DuplicatesQuestion(m.db, dups, parent=gui)
            dups = tuple(d.duplicates)
            if dups:
                paths, formats, metadata = [], [], []
                for mi, cover, book_paths in dups:
                    paths.extend(book_paths)
                    formats.extend([p.rpartition('.')[-1] for p in book_paths])
                    metadata.extend([mi for i in book_paths])
                ids = m.add_books(paths, formats, metadata,
                    add_duplicates=True, return_ids=True)[1]
                added_ids |= set(ids)
                num = len(ids)
                count += num

        for tdir in data.itervalues():
            try:
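When the duplicates dialog returns the books the user kept, the new code rebuilds the format list straight from the file names with `p.rpartition('.')[-1]`. A small standalone sketch of that extension trick (the helper name is illustrative):

```python
def formats_for(paths):
    # Same trick as the patch: the format is everything after the last dot.
    # Note rpartition returns the whole name unchanged when there is no dot.
    return [p.rpartition('.')[-1] for p in paths]

print(formats_for(['book.epub', 'other.azw3']))  # ['epub', 'azw3']
```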
@@ -22,7 +22,9 @@ class PluginWidget(Widget, Ui_Form):
                'override_profile_size', 'paper_size', 'custom_size',
                'preserve_cover_aspect_ratio', 'pdf_serif_family', 'unit',
                'pdf_sans_family', 'pdf_mono_family', 'pdf_standard_font',
                'pdf_default_font_size', 'pdf_mono_font_size', 'pdf_page_numbers'])
                'pdf_default_font_size', 'pdf_mono_font_size', 'pdf_page_numbers',
                'pdf_footer_template', 'pdf_header_template',
                ])
        self.db, self.book_id = db, book_id

        for x in get_option('paper_size').option.choices:
@@ -6,8 +6,8 @@
   <rect>
    <x>0</x>
    <y>0</y>
    <width>590</width>
    <height>395</height>
    <width>638</width>
    <height>498</height>
   </rect>
  </property>
  <property name="windowTitle">
@@ -84,6 +84,13 @@
    </property>
   </widget>
  </item>
  <item row="5" column="0" colspan="2">
   <widget class="QCheckBox" name="opt_pdf_page_numbers">
    <property name="text">
     <string>Add page &amp;numbers to the bottom of every page</string>
    </property>
   </widget>
  </item>
  <item row="6" column="0">
   <widget class="QLabel" name="label_4">
    <property name="text">
@@ -170,24 +177,52 @@
    </property>
   </widget>
  </item>
  <item row="12" column="0">
   <spacer name="verticalSpacer">
    <property name="orientation">
     <enum>Qt::Vertical</enum>
    </property>
    <property name="sizeHint" stdset="0">
     <size>
      <width>20</width>
      <height>213</height>
     </size>
    </property>
   </spacer>
  </item>
  <item row="5" column="0" colspan="2">
   <widget class="QCheckBox" name="opt_pdf_page_numbers">
    <property name="text">
     <string>Add page &amp;numbers to the bottom of every page</string>
  <item row="12" column="0" colspan="2">
   <widget class="QGroupBox" name="groupBox">
    <property name="title">
     <string>Page headers and footers</string>
    </property>
    <layout class="QFormLayout" name="formLayout_2">
     <item row="0" column="0" colspan="2">
      <widget class="QLabel" name="label_2">
       <property name="text">
        <string>You can insert headers and footers into every page of the produced PDF file by using header and footer templates. For examples, see the &lt;a href="http://manual.calibre-ebook.com/conversion.html#converting-to-pdf"&gt;documentation&lt;/a&gt;.</string>
       </property>
       <property name="wordWrap">
        <bool>true</bool>
       </property>
       <property name="openExternalLinks">
        <bool>true</bool>
       </property>
      </widget>
     </item>
     <item row="1" column="0">
      <widget class="QLabel" name="label_12">
       <property name="text">
        <string>&amp;Header template:</string>
       </property>
       <property name="buddy">
        <cstring>opt_pdf_header_template</cstring>
       </property>
      </widget>
     </item>
     <item row="1" column="1">
      <widget class="QLineEdit" name="opt_pdf_header_template"/>
     </item>
     <item row="2" column="0">
      <widget class="QLabel" name="label_13">
       <property name="text">
        <string>&amp;Footer template:</string>
       </property>
       <property name="buddy">
        <cstring>opt_pdf_footer_template</cstring>
       </property>
      </widget>
     </item>
     <item row="2" column="1">
      <widget class="QLineEdit" name="opt_pdf_footer_template"/>
     </item>
    </layout>
   </widget>
  </item>
 </layout>
@@ -9,7 +9,7 @@ import functools, re, os, traceback, errno, time
from collections import defaultdict

from PyQt4.Qt import (QAbstractTableModel, Qt, pyqtSignal, QIcon, QImage,
        QModelIndex, QVariant, QDateTime, QColor)
        QModelIndex, QVariant, QDateTime, QColor, QPixmap)

from calibre.gui2 import NONE, UNDEFINED_QDATETIME, error_dialog
from calibre.utils.pyparsing import ParseException
@@ -94,7 +94,14 @@ class ColumnIcon(object):
                    return icon_bitmap
                d = os.path.join(config_dir, 'cc_icons', icon)
                if (os.path.exists(d)):
                    icon_bitmap = QIcon(d)
                    icon_bitmap = QPixmap(d)
                    h = icon_bitmap.height()
                    w = icon_bitmap.width()
                    # If the image is landscape and width is more than 50%
                    # larger than height, use the pixmap. This tells Qt to display
                    # the image full width. It might be clipped to row height.
                    if w < (3 * h)/2:
                        icon_bitmap = QIcon(icon_bitmap)
                    icon_cache[id_][dex] = icon_bitmap
                    icon_bitmap_cache[icon] = icon_bitmap
                    self.mi = None
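The pixmap-vs-icon decision above reduces to a single aspect-ratio test: keep the raw pixmap (full-width rendering) only when the image is more than 50% wider than tall. A Qt-free sketch of that rule, using the same integer arithmetic as the Python 2 code (the function name is illustrative):

```python
def display_mode(w, h):
    """'pixmap' renders full width (may be clipped to row height);
    'icon' is scaled to the standard icon size, as in ColumnIcon."""
    # Mirrors: if w < (3 * h)/2: icon_bitmap = QIcon(icon_bitmap)
    return 'icon' if w < (3 * h) // 2 else 'pixmap'

print(display_mode(32, 32))   # icon   (square image)
print(display_mode(100, 32))  # pixmap (wide landscape image)
```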
@@ -16,13 +16,12 @@ from operator import attrgetter
from Queue import Queue, Empty
from io import BytesIO

from PyQt4.Qt import (QStyledItemDelegate, QTextDocument, QRectF, QIcon, Qt,
        QApplication, QDialog, QVBoxLayout, QLabel,
        QDialogButtonBox, QStyle, QStackedWidget, QWidget,
        QTableView, QGridLayout, QFontInfo, QPalette, QTimer,
        pyqtSignal, QAbstractTableModel, QVariant, QSize,
        QListView, QPixmap, QAbstractListModel, QColor, QRect,
        QTextBrowser, QStringListModel)
from PyQt4.Qt import (
    QStyledItemDelegate, QTextDocument, QRectF, QIcon, Qt, QApplication,
    QDialog, QVBoxLayout, QLabel, QDialogButtonBox, QStyle, QStackedWidget,
    QWidget, QTableView, QGridLayout, QFontInfo, QPalette, QTimer, pyqtSignal,
    QAbstractTableModel, QVariant, QSize, QListView, QPixmap, QModelIndex,
    QAbstractListModel, QColor, QRect, QTextBrowser, QStringListModel)
from PyQt4.QtWebKit import QWebView

from calibre.customize.ui import metadata_plugins
@@ -654,7 +653,7 @@ class CoversModel(QAbstractListModel):  # {{{
        for i, plugin in enumerate(metadata_plugins(['cover'])):
            self.covers.append((plugin.name+'\n'+_('Searching...'),
                QVariant(self.blank), None, True))
            self.plugin_map[plugin] = i+1
            self.plugin_map[plugin] = [i+1]

        if do_reset:
            self.reset()
@@ -685,48 +684,82 @@ class CoversModel(QAbstractListModel):  # {{{
    def plugin_for_index(self, index):
        row = index.row() if hasattr(index, 'row') else index
        for k, v in self.plugin_map.iteritems():
            if v == row:
            if row in v:
                return k

    def cover_keygen(self, x):
        pmap = x[2]
        if pmap is None:
            return 1
        return pmap.width()*pmap.height()

    def clear_failed(self):
        # Remove entries that are still waiting
        good = []
        pmap = {}
        dcovers = sorted(self.covers[1:], key=self.cover_keygen, reverse=True)
        cmap = {x:self.covers.index(x) for x in self.covers}
        def keygen(x):
            pmap = x[2]
            if pmap is None:
                return 1
            return pmap.width()*pmap.height()
        dcovers = sorted(self.covers[1:], key=keygen, reverse=True)
        cmap = {i:self.plugin_for_index(i) for i in xrange(len(self.covers))}
        for i, x in enumerate(self.covers[0:1] + dcovers):
            if not x[-1]:
                good.append(x)
                if i > 0:
                    plugin = self.plugin_for_index(cmap[x])
                    pmap[plugin] = len(good) - 1
                    plugin = cmap[i]
                    if plugin is not None:
                        try:
                            pmap[plugin].append(len(good) - 1)
                        except KeyError:
                            pmap[plugin] = [len(good)-1]
        self.covers = good
        self.plugin_map = pmap
        self.reset()

    def index_for_plugin(self, plugin):
        idx = self.plugin_map.get(plugin, 0)
        return self.index(idx)
    def pointer_from_index(self, index):
        row = index.row() if hasattr(index, 'row') else index
        try:
            return self.covers[row][2]
        except IndexError:
            pass

    def index_from_pointer(self, pointer):
        for r, (text, scaled, pmap, waiting) in enumerate(self.covers):
            if pointer == pmap:
                return self.index(r)
        return self.index(0)

    def update_result(self, plugin_name, width, height, data):
        idx = None
        for plugin, i in self.plugin_map.iteritems():
            if plugin.name == plugin_name:
                idx = i
                break
        if idx is None:
            return
        pmap = QPixmap()
        pmap.loadFromData(data)
        if pmap.isNull():
            return
        self.covers[idx] = self.get_item(plugin_name, pmap, waiting=False)
        self.dataChanged.emit(self.index(idx), self.index(idx))
        if plugin_name.endswith('}'):
            # multi cover plugin
            plugin_name = plugin_name.partition('{')[0]
            plugin = [plugin for plugin in self.plugin_map if plugin.name == plugin_name]
            if not plugin:
                return
            plugin = plugin[0]
            last_row = max(self.plugin_map[plugin])
            pmap = QPixmap()
            pmap.loadFromData(data)
            if pmap.isNull():
                return
            self.beginInsertRows(QModelIndex(), last_row, last_row)
            for rows in self.plugin_map.itervalues():
                for i in xrange(len(rows)):
                    if rows[i] >= last_row:
                        rows[i] += 1
            self.plugin_map[plugin].insert(-1, last_row)
            self.covers.insert(last_row, self.get_item(plugin_name, pmap, waiting=False))
            self.endInsertRows()
        else:
            # single cover plugin
            idx = None
            for plugin, rows in self.plugin_map.iteritems():
                if plugin.name == plugin_name:
                    idx = rows[0]
                    break
            if idx is None:
                return
            pmap = QPixmap()
            pmap.loadFromData(data)
            if pmap.isNull():
                return
            self.covers[idx] = self.get_item(plugin_name, pmap, waiting=False)
            self.dataChanged.emit(self.index(idx), self.index(idx))

    def cover_pixmap(self, index):
        row = index.row()
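The multi-cover branch of `update_result` has to keep `plugin_map` consistent when a row is inserted into the model: every stored row index at or after the insertion point shifts down by one. That bookkeeping can be isolated and tested without Qt (the helper name is illustrative):

```python
def shift_rows(plugin_map, insert_at):
    """After inserting a model row at insert_at, renumber every stored row
    index >= insert_at -- the same loop update_result() runs over itervalues."""
    for rows in plugin_map.values():
        for i, r in enumerate(rows):
            if r >= insert_at:
                rows[i] = r + 1

pm = {'PluginA': [1, 4], 'PluginB': [2]}
shift_rows(pm, 2)
print(pm)  # {'PluginA': [1, 5], 'PluginB': [3]}
```

Doing the renumbering before appending the new row's own index keeps every plugin pointing at the cover tuple it originally produced.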
@@ -774,9 +807,12 @@ class CoversView(QListView):  # {{{
        self.m.reset_covers()

    def clear_failed(self):
        plugin = self.m.plugin_for_index(self.currentIndex())
        pointer = self.m.pointer_from_index(self.currentIndex())
        self.m.clear_failed()
        self.select(self.m.index_for_plugin(plugin).row())
        if pointer is None:
            self.select(0)
        else:
            self.select(self.m.index_from_pointer(pointer).row())

    # }}}

@@ -852,10 +888,11 @@ class CoversWidget(QWidget):  # {{{
        if num < 2:
            txt = _('Could not find any covers for <b>%s</b>')%self.book.title
        else:
            txt = _('Found <b>%(num)d</b> covers of %(title)s. '
                'Pick the one you like best.')%dict(num=num-1,
            txt = _('Found <b>%(num)d</b> possible covers for %(title)s. '
                'When the download completes, the covers will be sorted by size.')%dict(num=num-1,
                title=self.title)
        self.msg.setText(txt)
        self.msg.setWordWrap(True)

        self.finished.emit()
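The "sorted by size" promise in the new message is implemented by `clear_failed`'s `keygen`: covers are ordered by pixel area, largest first, with the current cover pinned at index 0. A sketch of that ordering with plain `(width, height)` tuples standing in for `QPixmap` objects (names and data are illustrative):

```python
def sort_covers_by_size(covers):
    """Sort (name, scaled, pmap, waiting) tuples largest-area first, keeping
    the current-cover entry at index 0 -- mirrors clear_failed()'s keygen."""
    def area(x):
        pmap = x[2]
        return 1 if pmap is None else pmap[0] * pmap[1]  # (w, h) stand-in
    return covers[:1] + sorted(covers[1:], key=area, reverse=True)

covers = [('current', None, None, False),
          ('small', None, (60, 80), False),
          ('big', None, (600, 800), False)]
print([c[0] for c in sort_covers_by_size(covers)])  # ['current', 'big', 'small']
```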
@@ -1,7 +1,7 @@
# -*- coding: utf-8 -*-

from __future__ import (unicode_literals, division, absolute_import, print_function)
store_version = 1 # Needed for dynamic plugin loading
store_version = 3 # Needed for dynamic plugin loading

__license__ = 'GPL 3'
__copyright__ = '2011, John Schember <john@nachtimwald.com>'
@@ -18,13 +18,26 @@ from calibre import browser
from calibre.gui2 import open_url
from calibre.gui2.store.search_result import SearchResult

class AmazonDEKindleStore(StorePlugin):
    '''
    For comments on the implementation, please see amazon_plugin.py
    '''

    aff_id = {'tag': 'charhale0a-21'}
    store_link = ('http://www.amazon.de/gp/redirect.html?ie=UTF8&site-redirect=de'
                  '&tag=%(tag)s&linkCode=ur2&camp=1638&creative=19454'
                  '&location=http://www.amazon.de/ebooks-kindle/b?node=530886031')
    store_link_details = ('http://www.amazon.de/gp/redirect.html?ie=UTF8'
                          '&location=http://www.amazon.de/dp/%(asin)s&site-redirect=de'
                          '&tag=%(tag)s&linkCode=ur2&camp=1638&creative=6742')
    search_url = 'http://www.amazon.de/s/?url=search-alias%3Ddigital-text&field-keywords='

    # This class is copy/pasted from amazon_uk_plugin. Do not modify it in any
    # other amazon EU plugin. Be sure to paste it into all other amazon EU plugins
    # when modified.
    author_article = 'von '

    and_word = ' und '

    # ---- Copy from here to end

class AmazonEUBase(StorePlugin):
    '''
    For comments on the implementation, please see amazon_plugin.py
    '''
@@ -46,12 +59,18 @@ class AmazonEUBase(StorePlugin):
        doc = html.fromstring(f.read())#.decode('latin-1', 'replace'))

        data_xpath = '//div[contains(@class, "prod")]'
        format_xpath = './/ul[contains(@class, "rsltL")]//span[contains(@class, "lrg") and not(contains(@class, "bld"))]/text()'
        # Results can be in a grid (table) or a column
        format_xpath = (
            './/ul[contains(@class, "rsltL") or contains(@class, "rsltGridList")]'
            '//span[contains(@class, "lrg") and not(contains(@class, "bld"))]/text()')
        asin_xpath = '@name'
        cover_xpath = './/img[@class="productImage"]/@src'
        title_xpath = './/h3[@class="newaps"]/a//text()'
        author_xpath = './/h3[@class="newaps"]//span[contains(@class, "reg")]//text()'
        price_xpath = './/ul[contains(@class, "rsltL")]//span[contains(@class, "lrg") and contains(@class, "bld")]/text()'
        # Results can be in a grid (table) or a column
        price_xpath = (
            './/ul[contains(@class, "rsltL") or contains(@class, "rsltGridList")]'
            '//span[contains(@class, "lrg") and contains(@class, "bld")]/text()')

        for data in doc.xpath(data_xpath):
            if counter <= 0:
@@ -102,20 +121,3 @@ class AmazonEUBase(StorePlugin):
    def get_details(self, search_result, timeout):
        pass

class AmazonDEKindleStore(AmazonEUBase):
    '''
    For comments on the implementation, please see amazon_plugin.py
    '''

    aff_id = {'tag': 'charhale0a-21'}
    store_link = ('http://www.amazon.de/gp/redirect.html?ie=UTF8&site-redirect=de'
                  '&tag=%(tag)s&linkCode=ur2&camp=1638&creative=19454'
                  '&location=http://www.amazon.de/ebooks-kindle/b?node=530886031')
    store_link_details = ('http://www.amazon.de/gp/redirect.html?ie=UTF8'
                          '&location=http://www.amazon.de/dp/%(asin)s&site-redirect=de'
                          '&tag=%(tag)s&linkCode=ur2&camp=1638&creative=6742')
    search_url = 'http://www.amazon.de/s/?url=search-alias%3Ddigital-text&field-keywords='

    author_article = 'von '

    and_word = ' und '
@@ -1,7 +1,7 @@
# -*- coding: utf-8 -*-

from __future__ import (unicode_literals, division, absolute_import, print_function)
store_version = 1 # Needed for dynamic plugin loading
store_version = 3 # Needed for dynamic plugin loading

__license__ = 'GPL 3'
__copyright__ = '2011, John Schember <john@nachtimwald.com>'
@@ -18,12 +18,25 @@ from calibre import browser
from calibre.gui2 import open_url
from calibre.gui2.store.search_result import SearchResult

class AmazonESKindleStore(StorePlugin):
    '''
    For comments on the implementation, please see amazon_plugin.py
    '''

    # This class is copy/pasted from amazon_uk_plugin. Do not modify it in any
    # other amazon EU plugin. Be sure to paste it into all other amazon EU plugins
    # when modified.
    aff_id = {'tag': 'charhale09-21'}
    store_link = ('http://www.amazon.es/ebooks-kindle/b?_encoding=UTF8&'
                  'node=827231031&tag=%(tag)s&ie=UTF8&linkCode=ur2&camp=3626&creative=24790')
    store_link_details = ('http://www.amazon.es/gp/redirect.html?ie=UTF8&'
                          'location=http://www.amazon.es/dp/%(asin)s&tag=%(tag)s'
                          '&linkCode=ur2&camp=3626&creative=24790')
    search_url = 'http://www.amazon.es/s/?url=search-alias%3Ddigital-text&field-keywords='

    author_article = 'de '

    and_word = ' y '

    # ---- Copy from here to end

class AmazonEUBase(StorePlugin):
    '''
    For comments on the implementation, please see amazon_plugin.py
    '''
@@ -45,12 +58,18 @@ class AmazonEUBase(StorePlugin):
        doc = html.fromstring(f.read())#.decode('latin-1', 'replace'))

        data_xpath = '//div[contains(@class, "prod")]'
        format_xpath = './/ul[contains(@class, "rsltL")]//span[contains(@class, "lrg") and not(contains(@class, "bld"))]/text()'
        # Results can be in a grid (table) or a column
        format_xpath = (
            './/ul[contains(@class, "rsltL") or contains(@class, "rsltGridList")]'
            '//span[contains(@class, "lrg") and not(contains(@class, "bld"))]/text()')
        asin_xpath = '@name'
        cover_xpath = './/img[@class="productImage"]/@src'
        title_xpath = './/h3[@class="newaps"]/a//text()'
        author_xpath = './/h3[@class="newaps"]//span[contains(@class, "reg")]//text()'
        price_xpath = './/ul[contains(@class, "rsltL")]//span[contains(@class, "lrg") and contains(@class, "bld")]/text()'
        # Results can be in a grid (table) or a column
        price_xpath = (
            './/ul[contains(@class, "rsltL") or contains(@class, "rsltGridList")]'
            '//span[contains(@class, "lrg") and contains(@class, "bld")]/text()')

        for data in doc.xpath(data_xpath):
            if counter <= 0:
@@ -101,19 +120,3 @@ class AmazonEUBase(StorePlugin):
    def get_details(self, search_result, timeout):
        pass

class AmazonESKindleStore(AmazonEUBase):
    '''
    For comments on the implementation, please see amazon_plugin.py
    '''

    aff_id = {'tag': 'charhale09-21'}
    store_link = ('http://www.amazon.es/ebooks-kindle/b?_encoding=UTF8&'
                  'node=827231031&tag=%(tag)s&ie=UTF8&linkCode=ur2&camp=3626&creative=24790')
    store_link_details = ('http://www.amazon.es/gp/redirect.html?ie=UTF8&'
                          'location=http://www.amazon.es/dp/%(asin)s&tag=%(tag)s'
                          '&linkCode=ur2&camp=3626&creative=24790')
    search_url = 'http://www.amazon.es/s/?url=search-alias%3Ddigital-text&field-keywords='

    author_article = 'de '

    and_word = ' y '
@@ -1,7 +1,7 @@
# -*- coding: utf-8 -*-

from __future__ import (unicode_literals, division, absolute_import, print_function)
store_version = 1 # Needed for dynamic plugin loading
store_version = 3 # Needed for dynamic plugin loading

__license__ = 'GPL 3'
__copyright__ = '2011, John Schember <john@nachtimwald.com>'
@@ -18,13 +18,22 @@ from calibre import browser
from calibre.gui2 import open_url
from calibre.gui2.store.search_result import SearchResult

class AmazonFRKindleStore(StorePlugin):
    '''
    For comments on the implementation, please see amazon_plugin.py
    '''

    aff_id = {'tag': 'charhale-21'}
    store_link = 'http://www.amazon.fr/livres-kindle/b?ie=UTF8&node=695398031&ref_=sa_menu_kbo1&_encoding=UTF8&tag=%(tag)s&linkCode=ur2&camp=1642&creative=19458' % aff_id
    store_link_details = 'http://www.amazon.fr/gp/redirect.html?ie=UTF8&location=http://www.amazon.fr/dp/%(asin)s&tag=%(tag)s&linkCode=ur2&camp=1634&creative=6738'
    search_url = 'http://www.amazon.fr/s/?url=search-alias%3Ddigital-text&field-keywords='

    # This class is copy/pasted from amazon_uk_plugin. Do not modify it in any
    # other amazon EU plugin. Be sure to paste it into all other amazon EU plugins
    # when modified.
    author_article = 'de '

    and_word = ' et '

    # ---- Copy from here to end

class AmazonEUBase(StorePlugin):
    '''
    For comments on the implementation, please see amazon_plugin.py
    '''
@@ -46,12 +55,18 @@ class AmazonEUBase(StorePlugin):
        doc = html.fromstring(f.read())#.decode('latin-1', 'replace'))

        data_xpath = '//div[contains(@class, "prod")]'
        format_xpath = './/ul[contains(@class, "rsltL")]//span[contains(@class, "lrg") and not(contains(@class, "bld"))]/text()'
        # Results can be in a grid (table) or a column
        format_xpath = (
            './/ul[contains(@class, "rsltL") or contains(@class, "rsltGridList")]'
            '//span[contains(@class, "lrg") and not(contains(@class, "bld"))]/text()')
        asin_xpath = '@name'
        cover_xpath = './/img[@class="productImage"]/@src'
        title_xpath = './/h3[@class="newaps"]/a//text()'
        author_xpath = './/h3[@class="newaps"]//span[contains(@class, "reg")]//text()'
        price_xpath = './/ul[contains(@class, "rsltL")]//span[contains(@class, "lrg") and contains(@class, "bld")]/text()'
        # Results can be in a grid (table) or a column
        price_xpath = (
            './/ul[contains(@class, "rsltL") or contains(@class, "rsltGridList")]'
            '//span[contains(@class, "lrg") and contains(@class, "bld")]/text()')

        for data in doc.xpath(data_xpath):
            if counter <= 0:
@@ -102,16 +117,3 @@ class AmazonEUBase(StorePlugin):
    def get_details(self, search_result, timeout):
        pass

class AmazonFRKindleStore(AmazonEUBase):
    '''
    For comments on the implementation, please see amazon_plugin.py
    '''

    aff_id = {'tag': 'charhale-21'}
    store_link = 'http://www.amazon.fr/livres-kindle/b?ie=UTF8&node=695398031&ref_=sa_menu_kbo1&_encoding=UTF8&tag=%(tag)s&linkCode=ur2&camp=1642&creative=19458' % aff_id
    store_link_details = 'http://www.amazon.fr/gp/redirect.html?ie=UTF8&location=http://www.amazon.fr/dp/%(asin)s&tag=%(tag)s&linkCode=ur2&camp=1634&creative=6738'
    search_url = 'http://www.amazon.fr/s/?url=search-alias%3Ddigital-text&field-keywords='

    author_article = 'de '

    and_word = ' et '
@@ -1,7 +1,7 @@
# -*- coding: utf-8 -*-

from __future__ import (unicode_literals, division, absolute_import, print_function)
store_version = 1 # Needed for dynamic plugin loading
store_version = 3 # Needed for dynamic plugin loading

__license__ = 'GPL 3'
__copyright__ = '2011, John Schember <john@nachtimwald.com>'
@@ -18,12 +18,25 @@ from calibre import browser
from calibre.gui2 import open_url
from calibre.gui2.store.search_result import SearchResult

class AmazonITKindleStore(StorePlugin):
    '''
    For comments on the implementation, please see amazon_plugin.py
    '''

    # This class is copy/pasted from amazon_uk_plugin. Do not modify it in any
    # other amazon EU plugin. Be sure to paste it into all other amazon EU plugins
    # when modified.
    aff_id = {'tag': 'httpcharles07-21'}
    store_link = ('http://www.amazon.it/ebooks-kindle/b?_encoding=UTF8&'
                  'node=827182031&tag=%(tag)s&ie=UTF8&linkCode=ur2&camp=3370&creative=23322')
    store_link_details = ('http://www.amazon.it/gp/redirect.html?ie=UTF8&'
                          'location=http://www.amazon.it/dp/%(asin)s&tag=%(tag)s&'
                          'linkCode=ur2&camp=3370&creative=23322')
    search_url = 'http://www.amazon.it/s/?url=search-alias%3Ddigital-text&field-keywords='

    author_article = 'di '

    and_word = ' e '

    # ---- Copy from here to end

class AmazonEUBase(StorePlugin):
    '''
    For comments on the implementation, please see amazon_plugin.py
    '''
@@ -45,12 +58,18 @@ class AmazonEUBase(StorePlugin):
        doc = html.fromstring(f.read())#.decode('latin-1', 'replace'))

        data_xpath = '//div[contains(@class, "prod")]'
        format_xpath = './/ul[contains(@class, "rsltL")]//span[contains(@class, "lrg") and not(contains(@class, "bld"))]/text()'
        # Results can be in a grid (table) or a column
        format_xpath = (
            './/ul[contains(@class, "rsltL") or contains(@class, "rsltGridList")]'
            '//span[contains(@class, "lrg") and not(contains(@class, "bld"))]/text()')
        asin_xpath = '@name'
        cover_xpath = './/img[@class="productImage"]/@src'
        title_xpath = './/h3[@class="newaps"]/a//text()'
        author_xpath = './/h3[@class="newaps"]//span[contains(@class, "reg")]//text()'
        price_xpath = './/ul[contains(@class, "rsltL")]//span[contains(@class, "lrg") and contains(@class, "bld")]/text()'
        # Results can be in a grid (table) or a column
        price_xpath = (
            './/ul[contains(@class, "rsltL") or contains(@class, "rsltGridList")]'
            '//span[contains(@class, "lrg") and contains(@class, "bld")]/text()')

        for data in doc.xpath(data_xpath):
            if counter <= 0:
@@ -100,20 +119,3 @@ class AmazonEUBase(StorePlugin):

    def get_details(self, search_result, timeout):
        pass

class AmazonITKindleStore(AmazonEUBase):
    '''
    For comments on the implementation, please see amazon_plugin.py
    '''

    aff_id = {'tag': 'httpcharles07-21'}
    store_link = ('http://www.amazon.it/ebooks-kindle/b?_encoding=UTF8&'
                  'node=827182031&tag=%(tag)s&ie=UTF8&linkCode=ur2&camp=3370&creative=23322')
    store_link_details = ('http://www.amazon.it/gp/redirect.html?ie=UTF8&'
                          'location=http://www.amazon.it/dp/%(asin)s&tag=%(tag)s&'
                          'linkCode=ur2&camp=3370&creative=23322')
    search_url = 'http://www.amazon.it/s/?url=search-alias%3Ddigital-text&field-keywords='

    author_article = 'di '

    and_word = ' e '
@@ -1,7 +1,7 @@
# -*- coding: utf-8 -*-

from __future__ import (unicode_literals, division, absolute_import, print_function)
store_version = 1 # Needed for dynamic plugin loading
store_version = 3 # Needed for dynamic plugin loading

__license__ = 'GPL 3'
__copyright__ = '2011, John Schember <john@nachtimwald.com>'
@@ -129,12 +129,12 @@ class AmazonKindleStore(StorePlugin):
        doc = html.fromstring(f.read().decode('latin-1', 'replace'))

        data_xpath = '//div[contains(@class, "prod")]'
        format_xpath = './/ul[contains(@class, "rsltL")]//span[contains(@class, "lrg") and not(contains(@class, "bld"))]/text()'
        format_xpath = './/ul[contains(@class, "rsltL") or contains(@class, "rsltGridList")]//span[contains(@class, "lrg") and not(contains(@class, "bld"))]/text()'
        asin_xpath = '@name'
        cover_xpath = './/img[@class="productImage"]/@src'
        title_xpath = './/h3[@class="newaps"]/a//text()'
        author_xpath = './/h3[@class="newaps"]//span[contains(@class, "reg")]//text()'
        price_xpath = './/ul[contains(@class, "rsltL")]//span[contains(@class, "lrg") and contains(@class, "bld")]/text()'
        price_xpath = './/ul[contains(@class, "rsltL") or contains(@class, "rsltGridList")]//span[contains(@class, "lrg") and contains(@class, "bld")]/text()'

        for data in doc.xpath(data_xpath):
            if counter <= 0:
@ -1,7 +1,7 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
|
||||
from __future__ import (unicode_literals, division, absolute_import, print_function)
|
||||
store_version = 1 # Needed for dynamic plugin loading
|
||||
store_version = 3 # Needed for dynamic plugin loading
|
||||
|
||||
__license__ = 'GPL 3'
|
||||
__copyright__ = '2011, John Schember <john@nachtimwald.com>'
|
||||
@@ -19,11 +19,28 @@ from calibre.gui2.store import StorePlugin
 from calibre.gui2.store.search_result import SearchResult

-# This class is copy/pasted from amason_uk_plugin. Do not modify it in any
-# other amazon EU plugin. Be sure to paste it into all other amazon EU plugins
-# when modified.

-class AmazonEUBase(StorePlugin):
+class AmazonUKKindleStore(StorePlugin):
+    aff_id = {'tag': 'calcharles-21'}
+    store_link = ('http://www.amazon.co.uk/gp/redirect.html?ie=UTF8&'
+                  'location=http://www.amazon.co.uk/Kindle-eBooks/b?'
+                  'ie=UTF8&node=341689031&ref_=sa_menu_kbo2&tag=%(tag)s&'
+                  'linkCode=ur2&camp=1634&creative=19450')
+    store_link_details = ('http://www.amazon.co.uk/gp/redirect.html?ie=UTF8&'
+                          'location=http://www.amazon.co.uk/dp/%(asin)s&tag=%(tag)s&'
+                          'linkCode=ur2&camp=1634&creative=6738')
+    search_url = 'http://www.amazon.co.uk/s/?url=search-alias%3Ddigital-text&field-keywords='
+
+    author_article = 'by '
+
+    and_word = ' and '
+
-    # This code is copy/pasted from from here to the other amazon EU. Do not
-    # modify it in any other amazon EU plugin. Be sure to paste it into all
-    # other amazon EU plugins when modified.
-
-    # ---- Copy from here to end
-
+    '''
+    For comments on the implementation, please see amazon_plugin.py
+    '''
@@ -45,12 +62,18 @@ class AmazonEUBase(StorePlugin):
         doc = html.fromstring(f.read())#.decode('latin-1', 'replace'))

         data_xpath = '//div[contains(@class, "prod")]'
-        format_xpath = './/ul[contains(@class, "rsltL")]//span[contains(@class, "lrg") and not(contains(@class, "bld"))]/text()'
+        # Results can be in a grid (table) or a column
+        format_xpath = (
+            './/ul[contains(@class, "rsltL") or contains(@class, "rsltGridList")]'
+            '//span[contains(@class, "lrg") and not(contains(@class, "bld"))]/text()')
         asin_xpath = '@name'
         cover_xpath = './/img[@class="productImage"]/@src'
         title_xpath = './/h3[@class="newaps"]/a//text()'
         author_xpath = './/h3[@class="newaps"]//span[contains(@class, "reg")]//text()'
-        price_xpath = './/ul[contains(@class, "rsltL")]//span[contains(@class, "lrg") and contains(@class, "bld")]/text()'
+        # Results can be in a grid (table) or a column
+        price_xpath = (
+            './/ul[contains(@class, "rsltL") or contains(@class, "rsltGridList")]'
+            '//span[contains(@class, "lrg") and contains(@class, "bld")]/text()')

         for data in doc.xpath(data_xpath):
             if counter <= 0:
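The widened XPaths above accept both result layouts Amazon now serves: a column list (`rsltL`) and a grid list (`rsltGridList`). The same or-ed `contains()` pattern can be exercised standalone with lxml; the HTML below is a hypothetical reduction, not Amazon's real markup:

```python
from lxml import html

# Hypothetical reductions of the two result layouts
doc = html.fromstring('''
<div><ul class="rsltL"><li><span class="lrg">Kindle Edition</span></li></ul>
<ul class="rsltGridList"><li><span class="lrg bld">4,99</span></li></ul></div>
''')

# Or-ed contains() predicates match either layout
base = './/ul[contains(@class, "rsltL") or contains(@class, "rsltGridList")]'
fmt = doc.xpath(base + '//span[contains(@class, "lrg") and not(contains(@class, "bld"))]/text()')
price = doc.xpath(base + '//span[contains(@class, "lrg") and contains(@class, "bld")]/text()')
```

Note that `contains(@class, ...)` does substring matching, so it is worth checking that neither class token is a substring of the other before or-ing them.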
@@ -101,18 +124,3 @@ class AmazonEUBase(StorePlugin):
     def get_details(self, search_result, timeout):
         pass
-
-class AmazonUKKindleStore(AmazonEUBase):
-    aff_id = {'tag': 'calcharles-21'}
-    store_link = ('http://www.amazon.co.uk/gp/redirect.html?ie=UTF8&'
-                  'location=http://www.amazon.co.uk/Kindle-eBooks/b?'
-                  'ie=UTF8&node=341689031&ref_=sa_menu_kbo2&tag=%(tag)s&'
-                  'linkCode=ur2&camp=1634&creative=19450')
-    store_link_details = ('http://www.amazon.co.uk/gp/redirect.html?ie=UTF8&'
-                          'location=http://www.amazon.co.uk/dp/%(asin)s&tag=%(tag)s&'
-                          'linkCode=ur2&camp=1634&creative=6738')
-    search_url = 'http://www.amazon.co.uk/s/?url=search-alias%3Ddigital-text&field-keywords='
-
-    author_article = 'by '
-
-    and_word = ' and '
@@ -1,101 +0,0 @@
-# -*- coding: utf-8 -*-
-
-from __future__ import (unicode_literals, division, absolute_import, print_function)
-store_version = 1 # Needed for dynamic plugin loading
-
-__license__ = 'GPL 3'
-__copyright__ = '2011, John Schember <john@nachtimwald.com>'
-__docformat__ = 'restructuredtext en'
-
-import urllib2
-from contextlib import closing
-
-from lxml import html
-
-from PyQt4.Qt import QUrl
-
-from calibre import browser, url_slash_cleaner
-from calibre.gui2 import open_url
-from calibre.gui2.store import StorePlugin
-from calibre.gui2.store.basic_config import BasicStoreConfig
-from calibre.gui2.store.search_result import SearchResult
-from calibre.gui2.store.web_store_dialog import WebStoreDialog
-
-class BeWriteStore(BasicStoreConfig, StorePlugin):
-
-    def open(self, parent=None, detail_item=None, external=False):
-        url = 'http://www.bewrite.net/mm5/merchant.mvc?Screen=SFNT'
-
-        if external or self.config.get('open_external', False):
-            open_url(QUrl(url_slash_cleaner(detail_item if detail_item else url)))
-        else:
-            d = WebStoreDialog(self.gui, url, parent, detail_item)
-            d.setWindowTitle(self.name)
-            d.set_tags(self.config.get('tags', ''))
-            d.exec_()
-
-    def search(self, query, max_results=10, timeout=60):
-        url = 'http://www.bewrite.net/mm5/merchant.mvc?Search_Code=B&Screen=SRCH&Search=' + urllib2.quote(query)
-
-        br = browser()
-
-        counter = max_results
-        with closing(br.open(url, timeout=timeout)) as f:
-            doc = html.fromstring(f.read())
-            for data in doc.xpath('//div[@id="content"]//table/tr[position() > 1]'):
-                if counter <= 0:
-                    break
-
-                id = ''.join(data.xpath('.//a/@href'))
-                if not id:
-                    continue
-
-                heading = ''.join(data.xpath('./td[2]//text()'))
-                title, q, author = heading.partition('by ')
-                cover_url = ''
-                price = ''
-
-                counter -= 1
-
-                s = SearchResult()
-                s.cover_url = cover_url.strip()
-                s.title = title.strip()
-                s.author = author.strip()
-                s.price = price.strip()
-                s.detail_item = id.strip()
-                s.drm = SearchResult.DRM_UNLOCKED
-
-                yield s
-
-    def get_details(self, search_result, timeout):
-        br = browser()
-
-        with closing(br.open(search_result.detail_item, timeout=timeout)) as nf:
-            idata = html.fromstring(nf.read())
-
-            price = ''.join(idata.xpath('//div[@id="content"]//td[contains(text(), "ePub")]/text()'))
-            if not price:
-                price = ''.join(idata.xpath('//div[@id="content"]//td[contains(text(), "MOBI")]/text()'))
-            if not price:
-                price = ''.join(idata.xpath('//div[@id="content"]//td[contains(text(), "PDF")]/text()'))
-            price = '$' + price.split('$')[-1]
-            search_result.price = price.strip()
-
-            cover_img = idata.xpath('//div[@id="content"]//img/@src')
-            if cover_img:
-                for i in cover_img:
-                    if '00001' in i:
-                        cover_url = 'http://www.bewrite.net/mm5/' + i
-                        search_result.cover_url = cover_url.strip()
-                        break
-
-            formats = set([])
-            if idata.xpath('boolean(//div[@id="content"]//td[contains(text(), "ePub")])'):
-                formats.add('EPUB')
-            if idata.xpath('boolean(//div[@id="content"]//td[contains(text(), "PDF")])'):
-                formats.add('PDF')
-            if idata.xpath('boolean(//div[@id="content"]//td[contains(text(), "MOBI")])'):
-                formats.add('MOBI')
-            search_result.formats = ', '.join(list(formats))
-
-            return True
@@ -1,7 +1,7 @@
 # -*- coding: utf-8 -*-

 from __future__ import (unicode_literals, division, absolute_import, print_function)
-store_version = 1 # Needed for dynamic plugin loading
+store_version = 2 # Needed for dynamic plugin loading

 __license__ = 'GPL 3'
 __copyright__ = '2011, John Schember <john@nachtimwald.com>'
@@ -44,7 +44,7 @@ class BNStore(BasicStoreConfig, StorePlugin):
         with closing(br.open(url, timeout=timeout)) as f:
             raw = f.read()
             doc = html.fromstring(raw)
-            for data in doc.xpath('//ul[contains(@class, "result-set")]/li[contains(@class, "result")]'):
+            for data in doc.xpath('//ol[contains(@class, "result-set")]/li[contains(@class, "result")]'):
                 if counter <= 0:
                     break
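The fix above tracks Barnes & Noble's markup change from `<ul>` to `<ol>`, which had silently broken the old query. A more change-tolerant variant (a sketch, not the committed fix) matches either list element via the `self::` axis:

```python
from lxml import html

doc = html.fromstring('<ol class="result-set"><li class="result">A</li></ol>')
# self:: matches either list tag, so a ul <-> ol swap does not break the query
rows = doc.xpath('//*[self::ul or self::ol][contains(@class, "result-set")]'
                 '/li[contains(@class, "result")]')
```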
@@ -1,7 +1,7 @@
 # -*- coding: utf-8 -*-

 from __future__ import (unicode_literals, division, absolute_import, print_function)
-store_version = 1 # Needed for dynamic plugin loading
+store_version = 2 # Needed for dynamic plugin loading

 __license__ = 'GPL 3'
 __copyright__ = '2011, John Schember <john@nachtimwald.com>'
@@ -15,7 +15,7 @@ from lxml import html

 from PyQt4.Qt import QUrl

-from calibre import browser, url_slash_cleaner
+from calibre import browser, random_user_agent, url_slash_cleaner
 from calibre.gui2 import open_url
 from calibre.gui2.store import StorePlugin
 from calibre.gui2.store.basic_config import BasicStoreConfig
@@ -41,7 +41,7 @@ class GutenbergStore(BasicStoreConfig, StorePlugin):
     def search(self, query, max_results=10, timeout=60):
         url = 'http://m.gutenberg.org/ebooks/search.mobile/?default_prefix=all&sort_order=title&query=' + urllib.quote_plus(query)

-        br = browser()
+        br = browser(user_agent=random_user_agent())

         counter = max_results
         with closing(br.open(url, timeout=timeout)) as f:
@@ -72,7 +72,7 @@ class GutenbergStore(BasicStoreConfig, StorePlugin):
     def get_details(self, search_result, timeout):
         url = url_slash_cleaner('http://m.gutenberg.org/' + search_result.detail_item)

-        br = browser()
+        br = browser(user_agent=random_user_agent())
         with closing(br.open(url, timeout=timeout)) as nf:
             doc = html.fromstring(nf.read())
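calibre's `browser()` helper takes a `user_agent` keyword, and `random_user_agent()` picks from a pool of realistic agent strings so repeated Project Gutenberg requests look less uniform. The idea can be sketched with only the standard library; the agent pool here is hypothetical, not calibre's own list:

```python
import random
import urllib.request

USER_AGENTS = [  # hypothetical pool; calibre ships its own list
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',
    'Mozilla/5.0 (X11; Linux x86_64)',
]

def make_opener():
    # Give each opener a randomly chosen User-Agent header
    opener = urllib.request.build_opener()
    opener.addheaders = [('User-Agent', random.choice(USER_AGENTS))]
    return opener

ua = dict(make_opener().addheaders)['User-Agent']
```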
@@ -14,7 +14,7 @@ from functools import partial
 from PyQt4.Qt import (QPushButton, QFrame, QVariant, QMenu, QInputDialog,
     QDialog, QVBoxLayout, QDialogButtonBox, QSize, QStackedWidget, QWidget,
     QLabel, Qt, pyqtSignal, QIcon, QTreeWidget, QGridLayout, QTreeWidgetItem,
-    QToolButton, QItemSelectionModel)
+    QToolButton, QItemSelectionModel, QCursor)

 from calibre.ebooks.oeb.polish.container import get_container, AZW3Container
 from calibre.ebooks.oeb.polish.toc import (
@@ -190,7 +190,7 @@ class ItemView(QFrame): # {{{
         )))
         l.addWidget(b)

-        self.fal = b = QPushButton(_('Flatten the ToC'))
+        self.fal = b = QPushButton(_('&Flatten the ToC'))
         b.clicked.connect(self.flatten_toc)
         b.setToolTip(textwrap.fill(_(
             'Flatten the Table of Contents, putting all entries at the top level'
@@ -339,6 +339,185 @@ class ItemView(QFrame): # {{{

 # }}}

+class TreeWidget(QTreeWidget): # {{{
+
+    def __init__(self, parent):
+        QTreeWidget.__init__(self, parent)
+        self.setHeaderLabel(_('Table of Contents'))
+        self.setIconSize(QSize(ICON_SIZE, ICON_SIZE))
+        self.setDragEnabled(True)
+        self.setSelectionMode(self.ExtendedSelection)
+        self.viewport().setAcceptDrops(True)
+        self.setDropIndicatorShown(True)
+        self.setDragDropMode(self.InternalMove)
+        self.setAutoScroll(True)
+        self.setAutoScrollMargin(ICON_SIZE*2)
+        self.setDefaultDropAction(Qt.MoveAction)
+        self.setAutoExpandDelay(1000)
+        self.setAnimated(True)
+        self.setMouseTracking(True)
+        self.in_drop_event = False
+        self.root = self.invisibleRootItem()
+        self.setContextMenuPolicy(Qt.CustomContextMenu)
+        self.customContextMenuRequested.connect(self.show_context_menu)
+
+    def iteritems(self, parent=None):
+        if parent is None:
+            parent = self.invisibleRootItem()
+        for i in xrange(parent.childCount()):
+            child = parent.child(i)
+            yield child
+            for gc in self.iteritems(parent=child):
+                yield gc
+
+    def dropEvent(self, event):
+        self.in_drop_event = True
+        try:
+            super(TreeWidget, self).dropEvent(event)
+        finally:
+            self.in_drop_event = False
+
+    def selectedIndexes(self):
+        ans = super(TreeWidget, self).selectedIndexes()
+        if self.in_drop_event:
+            # For order to be preserved when moving by drag and drop, we
+            # have to ensure that selectedIndexes returns an ordered list of
+            # indexes.
+            sort_map = {self.indexFromItem(item):i for i, item in enumerate(self.iteritems())}
+            ans = sorted(ans, key=lambda x:sort_map.get(x, -1), reverse=True)
+        return ans
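The `selectedIndexes()` override above exists because Qt reports the selection in the order items were selected, while `InternalMove` re-inserts dropped items in the order this list yields them; sorting by tree (pre-order) position, reversed, keeps multi-item drags stable. The same trick with plain strings standing in for `QModelIndex`:

```python
# Items as discovered by a pre-order walk of the tree
tree_order = ['ch1', 'ch1.1', 'ch1.2', 'ch2', 'ch3']

# Selection order as the user happened to click
selected = ['ch2', 'ch1.1', 'ch3']

# Map each item to its pre-order position, then sort the selection by
# that position, reversed, exactly as the override above does
sort_map = {item: i for i, item in enumerate(tree_order)}
ordered = sorted(selected, key=lambda x: sort_map.get(x, -1), reverse=True)
print(ordered)  # → ['ch3', 'ch2', 'ch1.1']
```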
+    def highlight_item(self, item):
+        self.setCurrentItem(item, 0, QItemSelectionModel.ClearAndSelect)
+        self.scrollToItem(item)
+
+    def move_left(self):
+        item = self.currentItem()
+        if item is not None:
+            parent = item.parent()
+            if parent is not None:
+                is_expanded = item.isExpanded() or item.childCount() == 0
+                gp = parent.parent() or self.invisibleRootItem()
+                idx = gp.indexOfChild(parent)
+                for gc in [parent.child(i) for i in xrange(parent.indexOfChild(item)+1, parent.childCount())]:
+                    parent.removeChild(gc)
+                    item.addChild(gc)
+                parent.removeChild(item)
+                gp.insertChild(idx+1, item)
+                if is_expanded:
+                    self.expandItem(item)
+                self.highlight_item(item)
+
+    def move_right(self):
+        item = self.currentItem()
+        if item is not None:
+            parent = item.parent() or self.invisibleRootItem()
+            idx = parent.indexOfChild(item)
+            if idx > 0:
+                is_expanded = item.isExpanded()
+                np = parent.child(idx-1)
+                parent.removeChild(item)
+                np.addChild(item)
+                if is_expanded:
+                    self.expandItem(item)
+                self.highlight_item(item)
+
+    def move_down(self):
+        item = self.currentItem()
+        if item is None:
+            if self.root.childCount() == 0:
+                return
+            item = self.root.child(0)
+            self.highlight_item(item)
+            return
+        parent = item.parent() or self.root
+        idx = parent.indexOfChild(item)
+        if idx == parent.childCount() - 1:
+            # At end of parent, need to become sibling of parent
+            if parent is self.root:
+                return
+            gp = parent.parent() or self.root
+            parent.removeChild(item)
+            gp.insertChild(gp.indexOfChild(parent)+1, item)
+        else:
+            sibling = parent.child(idx+1)
+            parent.removeChild(item)
+            sibling.insertChild(0, item)
+        self.highlight_item(item)
+
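The rule implemented by `move_down()` above: the last child of its parent pops out to become the parent's next sibling; any other item dives into the following sibling as its first child. A minimal model of that rule on nested dicts (no Qt involved; the `node`/`path` representation is illustrative):

```python
def node(name, *kids):
    return {'name': name, 'children': list(kids)}

def move_down(root, path):
    """Move the node at `path` (a list of child indexes) one slot down,
    following the TreeWidget rule described above."""
    parent = root
    for i in path[:-1]:
        parent = parent['children'][i]
    idx = path[-1]
    item = parent['children'][idx]
    if idx == len(parent['children']) - 1:
        # Last child: pop out, become the sibling right after parent
        if len(path) == 1:
            return  # parent is the root, nowhere to go
        gp = root
        for i in path[:-2]:
            gp = gp['children'][i]
        parent['children'].pop(idx)
        gp['children'].insert(path[-2] + 1, item)
    else:
        # Otherwise: dive into the following sibling as its first child
        sibling = parent['children'][idx + 1]
        parent['children'].pop(idx)
        sibling['children'].insert(0, item)

toc = node('root', node('ch1', node('s1'), node('s2')), node('ch2'))
move_down(toc, [0, 1])  # 's2' is ch1's last child -> becomes sibling after ch1
print([c['name'] for c in toc['children']])  # → ['ch1', 's2', 'ch2']
```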
+    def move_up(self):
+        item = self.currentItem()
+        if item is None:
+            if self.root.childCount() == 0:
+                return
+            item = self.root.child(self.root.childCount()-1)
+            self.highlight_item(item)
+            return
+        parent = item.parent() or self.root
+        idx = parent.indexOfChild(item)
+        if idx == 0:
+            # At start of parent, need to become sibling of parent
+            if parent is self.root:
+                return
+            gp = parent.parent() or self.root
+            parent.removeChild(item)
+            gp.insertChild(gp.indexOfChild(parent), item)
+        else:
+            sibling = parent.child(idx-1)
+            parent.removeChild(item)
+            sibling.addChild(item)
+        self.highlight_item(item)
+
+    def del_items(self):
+        for item in self.selectedItems():
+            p = item.parent() or self.root
+            p.removeChild(item)
+
+    def title_case(self):
+        from calibre.utils.titlecase import titlecase
+        for item in self.selectedItems():
+            t = unicode(item.data(0, Qt.DisplayRole).toString())
+            item.setData(0, Qt.DisplayRole, titlecase(t))
+
+    def keyPressEvent(self, ev):
+        if ev.key() == Qt.Key_Left and ev.modifiers() & Qt.CTRL:
+            self.move_left()
+            ev.accept()
+        elif ev.key() == Qt.Key_Right and ev.modifiers() & Qt.CTRL:
+            self.move_right()
+            ev.accept()
+        elif ev.key() == Qt.Key_Up and ev.modifiers() & Qt.CTRL:
+            self.move_up()
+            ev.accept()
+        elif ev.key() == Qt.Key_Down and ev.modifiers() & Qt.CTRL:
+            self.move_down()
+            ev.accept()
+        elif ev.key() in (Qt.Key_Delete, Qt.Key_Backspace):
+            self.del_items()
+            ev.accept()
+        else:
+            return super(TreeWidget, self).keyPressEvent(ev)
+
+    def show_context_menu(self, point):
+        item = self.currentItem()
+        if item is not None:
+            m = QMenu()
+            ci = unicode(item.data(0, Qt.DisplayRole).toString())
+            p = item.parent() or self.invisibleRootItem()
+            idx = p.indexOfChild(item)
+            if idx > 0:
+                m.addAction(QIcon(I('arrow-up.png')), _('Move "%s" up')%ci, self.move_up)
+            if idx + 1 < p.childCount():
+                m.addAction(QIcon(I('arrow-down.png')), _('Move "%s" down')%ci, self.move_down)
+            m.addAction(QIcon(I('trash.png')), _('Remove all selected items'), self.del_items)
+            if item.parent() is not None:
+                m.addAction(QIcon(I('back.png')), _('Unindent "%s"')%ci, self.move_left)
+            if idx > 0:
+                m.addAction(QIcon(I('forward.png')), _('Indent "%s"')%ci, self.move_right)
+            m.addAction(_('Change all selected items to title case'), self.title_case)
+            m.exec_(QCursor.pos())
+# }}}

 class TOCView(QWidget): # {{{

     add_new_item = pyqtSignal(object, object)
@@ -347,41 +526,44 @@ class TOCView(QWidget): # {{{
         QWidget.__init__(self, parent)
         l = self.l = QGridLayout()
         self.setLayout(l)
-        self.tocw = t = QTreeWidget(self)
-        t.setHeaderLabel(_('Table of Contents'))
-        t.setIconSize(QSize(ICON_SIZE, ICON_SIZE))
-        t.setDragEnabled(True)
-        t.setSelectionMode(t.ExtendedSelection)
-        t.viewport().setAcceptDrops(True)
-        t.setDropIndicatorShown(True)
-        t.setDragDropMode(t.InternalMove)
-        t.setAutoScroll(True)
-        t.setAutoScrollMargin(ICON_SIZE*2)
-        t.setDefaultDropAction(Qt.MoveAction)
-        t.setAutoExpandDelay(1000)
-        t.setAnimated(True)
-        t.setMouseTracking(True)
-        l.addWidget(t, 0, 0, 5, 3)
+        self.tocw = t = TreeWidget(self)
+        l.addWidget(t, 0, 0, 7, 3)
         self.up_button = b = QToolButton(self)
         b.setIcon(QIcon(I('arrow-up.png')))
         b.setIconSize(QSize(ICON_SIZE, ICON_SIZE))
         l.addWidget(b, 0, 3)
-        b.setToolTip(_('Move current entry up'))
+        b.setToolTip(_('Move current entry up [Ctrl+Up]'))
         b.clicked.connect(self.move_up)
+
+        self.left_button = b = QToolButton(self)
+        b.setIcon(QIcon(I('back.png')))
+        b.setIconSize(QSize(ICON_SIZE, ICON_SIZE))
+        l.addWidget(b, 2, 3)
+        b.setToolTip(_('Unindent the current entry [Ctrl+Left]'))
+        b.clicked.connect(self.tocw.move_left)
+
         self.del_button = b = QToolButton(self)
         b.setIcon(QIcon(I('trash.png')))
         b.setIconSize(QSize(ICON_SIZE, ICON_SIZE))
-        l.addWidget(b, 2, 3)
+        l.addWidget(b, 3, 3)
         b.setToolTip(_('Remove all selected entries'))
         b.clicked.connect(self.del_items)
+
+        self.right_button = b = QToolButton(self)
+        b.setIcon(QIcon(I('forward.png')))
+        b.setIconSize(QSize(ICON_SIZE, ICON_SIZE))
+        l.addWidget(b, 4, 3)
+        b.setToolTip(_('Indent the current entry [Ctrl+Right]'))
+        b.clicked.connect(self.tocw.move_right)
+
         self.down_button = b = QToolButton(self)
         b.setIcon(QIcon(I('arrow-down.png')))
         b.setIconSize(QSize(ICON_SIZE, ICON_SIZE))
-        l.addWidget(b, 4, 3)
-        b.setToolTip(_('Move current entry down'))
+        l.addWidget(b, 6, 3)
+        b.setToolTip(_('Move current entry down [Ctrl+Down]'))
         b.clicked.connect(self.move_down)
         self.expand_all_button = b = QPushButton(_('&Expand all'))
-        col = 5
+        col = 7
         l.addWidget(b, col, 0)
         b.clicked.connect(self.tocw.expandAll)
         self.collapse_all_button = b = QPushButton(_('&Collapse all'))
@@ -412,9 +594,7 @@ class TOCView(QWidget): # {{{
         return unicode(item.data(0, Qt.DisplayRole).toString())

     def del_items(self):
-        for item in self.tocw.selectedItems():
-            p = item.parent() or self.root
-            p.removeChild(item)
+        self.tocw.del_items()

     def delete_current_item(self):
         item = self.tocw.currentItem()
@@ -423,13 +603,8 @@ class TOCView(QWidget): # {{{
             p.removeChild(item)

     def iteritems(self, parent=None):
-        if parent is None:
-            parent = self.root
-        for i in xrange(parent.childCount()):
-            child = parent.child(i)
-            yield child
-            for gc in self.iteritems(parent=child):
-                yield gc
+        for item in self.tocw.iteritems(parent=parent):
+            yield item

     def flatten_toc(self):
         found = True
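`flatten_toc()` (behind the new "Flatten the ToC" button) hoists every nested entry to the top level while preserving reading order. On plain dicts, a pre-order walk does the same job; the structure here is hypothetical, not the Qt item API:

```python
def flatten(entries):
    """Return every entry title at one level, in pre-order (reading) order."""
    flat = []
    for e in entries:
        flat.append(e['title'])
        flat.extend(flatten(e.get('children', [])))  # hoist all descendants
    return flat

toc = [{'title': 'ch1', 'children': [{'title': 's1'}, {'title': 's2'}]},
       {'title': 'ch2'}]
print(flatten(toc))  # → ['ch1', 's1', 's2', 'ch2']
```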
@@ -457,54 +632,13 @@ class TOCView(QWidget): # {{{
         self.tocw.setCurrentItem(None)

     def highlight_item(self, item):
-        self.tocw.setCurrentItem(item, 0, QItemSelectionModel.ClearAndSelect)
-        self.tocw.scrollToItem(item)
-
-    def move_down(self):
-        item = self.tocw.currentItem()
-        if item is None:
-            if self.root.childCount() == 0:
-                return
-            item = self.root.child(0)
-            self.highlight_item(item)
-            return
-        parent = item.parent() or self.root
-        idx = parent.indexOfChild(item)
-        if idx == parent.childCount() - 1:
-            # At end of parent, need to become sibling of parent
-            if parent is self.root:
-                return
-            gp = parent.parent() or self.root
-            parent.removeChild(item)
-            gp.insertChild(gp.indexOfChild(parent)+1, item)
-        else:
-            sibling = parent.child(idx+1)
-            parent.removeChild(item)
-            sibling.insertChild(0, item)
-        self.highlight_item(item)
+        self.tocw.highlight_item(item)

     def move_up(self):
-        item = self.tocw.currentItem()
-        if item is None:
-            if self.root.childCount() == 0:
-                return
-            item = self.root.child(self.root.childCount()-1)
-            self.highlight_item(item)
-            return
-        parent = item.parent() or self.root
-        idx = parent.indexOfChild(item)
-        if idx == 0:
-            # At end of parent, need to become sibling of parent
-            if parent is self.root:
-                return
-            gp = parent.parent() or self.root
-            parent.removeChild(item)
-            gp.insertChild(gp.indexOfChild(parent), item)
-        else:
-            sibling = parent.child(idx-1)
-            parent.removeChild(item)
-            sibling.addChild(item)
-        self.highlight_item(item)
+        self.tocw.move_up()
+
+    def move_down(self):
+        self.tocw.move_down()

     def update_status_tip(self, item):
         c = item.data(0, Qt.UserRole).toPyObject()
@@ -592,6 +592,9 @@ def command_set_metadata(args, dbpath):
         print >>sys.stderr, _('You must specify either a field or an opf file')
         return 1
     book_id = int(args[1])
+    if book_id not in db.all_ids():
+        prints(_('No book with id: %s in the database')%book_id, file=sys.stderr)
+        raise SystemExit(1)

     if len(args) > 2:
         opf = args[2]
@@ -870,6 +873,9 @@ def parse_series_string(db, label, value):
     return val, s_index

 def do_set_custom(db, col, id_, val, append):
+    if id_ not in db.all_ids():
+        prints(_('No book with id: %s in the database')%id_, file=sys.stderr)
+        raise SystemExit(1)
     if db.custom_column_label_map[col]['datatype'] == 'series':
         val, s_index = parse_series_string(db, col, val)
     db.set_custom(id_, val, extra=s_index, label=col, append=append)
@@ -941,11 +947,16 @@ def command_custom_columns(args, dbpath):

 def do_remove_custom_column(db, label, force):
     if not force:
-        q = raw_input(_('You will lose all data in the column: %r.'
+        q = raw_input(_('You will lose all data in the column: %s.'
                         ' Are you sure (y/n)? ')%label)
         if q.lower().strip() != _('y'):
             return
-    db.delete_custom_column(label=label)
+    try:
+        db.delete_custom_column(label=label)
+    except KeyError:
+        prints(_('No column named %s found. You must use column labels, not titles.'
+                 ' Use calibredb custom_columns to get a list of labels.')%label, file=sys.stderr)
+        raise SystemExit(1)
     prints('Column %r removed.'%label)

 def remove_custom_column_option_parser():
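Both calibredb fixes follow one pattern: validate the user-supplied id or label up front and exit with a readable message instead of a traceback. Outside calibre, the guard looks roughly like this (`remove_column` and its arguments are illustrative, not calibre's API):

```python
import sys

def remove_column(columns, label):
    # Fail early with a friendly message rather than a KeyError traceback
    if label not in columns:
        print('No column named %s found. You must use column labels, not titles.' % label,
              file=sys.stderr)
        raise SystemExit(1)
    del columns[label]
    return 'Column %r removed.' % label

cols = {'mycolumn': 'series'}
print(remove_column(cols, 'mycolumn'))  # → Column 'mycolumn' removed.
```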
@@ -1343,23 +1343,39 @@ class LibraryDatabase2(LibraryDatabase, SchemaUpgrade, CustomColumns):
             if not isinstance(dest, basestring):
                 raise Exception("Error, you must pass the dest as a path when"
                         " using windows_atomic_move")
-            if dest and not samefile(dest, path):
-                windows_atomic_move.copy_path_to(path, dest)
+            if dest:
+                if samefile(path, dest):
+                    # Ensure that the file has the same case as dest
+                    try:
+                        if path != dest:
+                            os.rename(path, dest)
+                    except:
+                        pass # Nothing too catastrophic happened, the cases mismatch, that's all
+                else:
+                    windows_atomic_move.copy_path_to(path, dest)
         else:
             if hasattr(dest, 'write'):
                 with lopen(path, 'rb') as f:
                     shutil.copyfileobj(f, dest)
                 if hasattr(dest, 'flush'):
                     dest.flush()
-            elif dest and not samefile(dest, path):
-                if use_hardlink:
-                    try:
-                        hardlink_file(path, dest)
-                        return
-                    except:
-                        pass
-                with lopen(path, 'rb') as f, lopen(dest, 'wb') as d:
-                    shutil.copyfileobj(f, d)
+            elif dest:
+                if samefile(dest, path):
+                    if not self.is_case_sensitive and path != dest:
+                        # Ensure that the file has the same case as dest
+                        try:
+                            os.rename(path, dest)
+                        except:
+                            pass # Nothing too catastrophic happened, the cases mismatch, that's all
+                else:
+                    if use_hardlink:
+                        try:
+                            hardlink_file(path, dest)
+                            return
+                        except:
+                            pass
+                    with lopen(path, 'rb') as f, lopen(dest, 'wb') as d:
+                        shutil.copyfileobj(f, d)

     def copy_cover_to(self, index, dest, index_is_id=False,
             windows_atomic_move=None, use_hardlink=False):
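The `copy_format_to()` change handles the regression where `samefile(path, dest)` is true on a case-insensitive filesystem but the on-disk case is wrong, so the old code skipped the copy and the case never changed; a bare `os.rename()` then fixes only the case. The predicate can be sketched portably, with an explicit flag standing in for `self.is_case_sensitive`:

```python
import os

def needs_case_fix(path, dest, case_sensitive):
    # Same name on a case-insensitive filesystem, differing only in case
    return (not case_sensitive) and path != dest and path.lower() == dest.lower()

def fix_case(path, dest, case_sensitive):
    """Rename path to dest when only the letter case differs (no-op otherwise)."""
    if needs_case_fix(path, dest, case_sensitive):
        try:
            os.rename(path, dest)  # on such filesystems this only adjusts case
        except OSError:
            pass  # cases mismatch, nothing too catastrophic
```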
@@ -347,7 +347,7 @@ class ZshCompleter(object): # {{{
             subcommands.append(';;')

         f.write('\n_calibredb() {')
-        f.write(
+        f.write((
             r'''
     local state line state_descr context
     typeset -A opt_args
@@ -370,7 +370,7 @@ class ZshCompleter(object): # {{{
     esac

     return ret
-    '''%'\n    '.join(subcommands))
+    '''%'\n    '.join(subcommands)).encode('utf-8'))
         f.write('\n}\n\n')

     def write(self):
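The `ZshCompleter` fix matters because the completion script file is written as bytes: the %-formatted text must be explicitly encoded to UTF-8 before `f.write()`, rather than relying on implicit coercion, which fails once a subcommand description contains non-ASCII text. A self-contained illustration with an in-memory binary sink (the subcommand list is made up):

```python
import io

subcommands = ['list', 'catalogue', 'sömething']  # hypothetical, non-ASCII on purpose
script = 'case $words[1] in\n    %s\nesac\n' % '\n    '.join(subcommands)

buf = io.BytesIO()                 # binary sink, like the completion file handle
buf.write(script.encode('utf-8'))  # encode explicitly: bytes in, no implicit coercion
```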
File diff suppressed because it is too large
Some files were not shown because too many files have changed in this diff