Mirror of https://github.com/kovidgoyal/calibre.git (synced 2025-07-07 10:14:46 -04:00)

commit a3ee82fe3d: 0.9.6
@@ -35,4 +35,7 @@ nbproject/
 .settings/
 *.DS_Store
 calibre_plugins/
-./src/calibre/gui2/catalog/catalog_csv_xml.ui.autosave
+recipes/.git
+recipes/.gitignore
+recipes/README
+recipes/katalog_egazeciarz.recipe
@@ -47,12 +47,6 @@ License: Apache 2.0
 The full text of the Apache 2.0 license is available at:
 http://www.apache.org/licenses/LICENSE-2.0
 
-Files: src/sfntly/*
-Copyright: Google Inc.
-License: Apache 2.0
-The full text of the Apache 2.0 license is available at:
-http://www.apache.org/licenses/LICENSE-2.0
-
 Files: resources/viewer/mathjax/*
 Copyright: Unknown
 License: Apache 2.0
@@ -19,6 +19,61 @@
 # new recipes:
 #  - title:
 
+- version: 0.9.6
+  date: 2012-11-10
+
+  new features:
+    - title: "Experimental support for subsetting fonts"
+      description: "Subsetting a font means reducing the font to contain only the glyphs for the text actually present in the book. This can easily halve the size of the font. calibre can now do this for all embedded fonts during a conversion. Turn it on via the 'Subset all embedded fonts' option under the Look & Feel section of the conversion dialog. calibre can subset both TrueType and OpenType fonts. Note that this code is very new and likely has bugs, so please check the output if you turn on subsetting. The conversion log will have info about the subsetting operations."
+      type: major
+
+    - title: "EPUB Input: Try to workaround EPUBs that have missing or damaged ZIP central directories. calibre should now be able to read/convert such an EPUB file, provided it does not suffer from further corruption."
+
+    - title: "Allow using identifiers in save to disk templates."
+      tickets: [1074623]
+
+    - title: "calibredb: Add an option to not notify the GUI"
+
+    - title: "Catalogs: Fix long tags causing catalog generation to fail on windows. Add the ability to cross-reference authors, i.e. to relist the authors for a book with multiple authors separately."
+      tickets: [1074931]
+
+    - title: "Edit metadata dialog: Add a clear tags button to remove all tags with a single click"
+
+    - title: "Add search to the font family chooser dialog"
+
+  bug fixes:
+    - title: "Windows: Fix a long standing bug in the device eject code that for some reason only manifested in 0.9.5."
+      tickets: [1075782]
+
+    - title: "Get Books: Fix Amazon stores, Google Books store and libri.de"
+
+    - title: "Kobo driver: More fixes for on device book matching, and list books as being on device even if the Kobo has not yet indexed them. Also some performance improvements."
+      tickets: [1069617]
+
+    - title: "EPUB Output: Remove duplicate id and name attributes to eliminate pointless noise from the various epub check utilities"
+
+    - title: "Ask for confirmation before removing plugins"
+
+    - title: "Fix bulk convert queueing dialog becoming very long if any of the books have a very long title."
+      tickets: [1076191]
+
+    - title: "Fix deleting custom column tags like data from the Tag browser not updating the last modified timestamp for affected books"
+      tickets: [1075476]
+
+    - title: "When updating a previously broken plugin, do not show an error message because the previous version of the plugin cannot be loaded"
+
+    - title: "Fix regression that broke the Template Editor"
+
+  improved recipes:
+    - Various updated Polish recipes
+    - London Review of Books
+    - Yemen Times
+
+  new recipes:
+    - title: "Various Polish news sources"
+      author: Artur Stachecki
+
+
 - version: 0.9.5
   date: 2012-11-02
 
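The "subsetting fonts" entry above can be illustrated with a tiny sketch. This is not calibre's actual implementation (which rewrites TrueType/OpenType glyph tables); it only shows the core idea, namely computing the set of characters a book actually uses so every other glyph can be dropped:

```python
# Illustrative sketch only, not calibre's subsetting code: collect the
# characters that actually occur in the book's text; glyphs for any other
# character can be removed from the embedded font.
def characters_to_keep(chapter_texts):
    used = set()
    for text in chapter_texts:
        used.update(text)
    return used

kept = characters_to_keep(["Hello", "world"])
```

In a real subsetter, this character set is then mapped through the font's cmap table to glyph ids before pruning, which is the part the sketch leaves out.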
@@ -327,9 +327,8 @@ You can browse your |app| collection on your Android device is by using the
 calibre content server, which makes your collection available over the net.
 First perform the following steps in |app|
 
-  * Set the :guilabel:`Preferred Output Format` in |app| to EPUB (The output format can be set under :guilabel:`Preferences->Interface->Behavior`)
-  * Set the output profile to Tablet (this will work for phones as well), under :guilabel:`Preferences->Conversion->Common Options->Page Setup`
-  * Convert the books you want to read on your device to EPUB format by selecting them and clicking the Convert button.
+  * Set the :guilabel:`Preferred Output Format` in |app| to EPUB for normal Android devices or MOBI for Kindles (The output format can be set under :guilabel:`Preferences->Interface->Behavior`)
+  * Convert the books you want to read on your device to EPUB/MOBI format by selecting them and clicking the Convert button.
   * Turn on the Content Server in |app|'s preferences and leave |app| running.
 
 Now on your Android device, open the browser and browse to
@@ -722,8 +721,8 @@ You can switch |app| to using a backed up library folder by simply clicking the
 
 If you want to backup the |app| configuration/plugins, you have to backup the config directory. You can find this config directory via :guilabel:`Preferences->Miscellaneous`. Note that restoring configuration directories is not officially supported, but should work in most cases. Just copy the contents of the backup directory into the current configuration directory to restore.
 
-How do I use purchased EPUB books with |app|?
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+How do I use purchased EPUB books with |app| (or what do I do with .acsm files)?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 Most purchased EPUB books have `DRM <http://drmfree.calibre-ebook.com/about#drm>`_. This prevents |app| from opening them. You can still use |app| to store and transfer them to your ebook reader. First, you must authorize your reader on a windows machine with Adobe Digital Editions. Once this is done, EPUB books transferred with |app| will work fine on your reader. When you purchase an epub book from a website, you will get an ".acsm" file. This file should be opened with Adobe Digital Editions, which will then download the actual ".epub" ebook. The ebook file will be stored in the folder "My Digital Editions", from where you can add it to |app|.
 
 I am getting a "Permission Denied" error?
@@ -2,7 +2,9 @@ import re
 
 from calibre.web.feeds.news import BasicNewsRecipe
 
+
 class FocusRecipe(BasicNewsRecipe):
+
     __license__ = 'GPL v3'
     __author__ = u'intromatyk <intromatyk@gmail.com>'
     language = 'pl'
@@ -12,10 +14,10 @@ class FocusRecipe(BasicNewsRecipe):
     publisher = u'Gruner + Jahr Polska'
     category = u'News'
     description = u'Newspaper'
-    category='magazine'
-    cover_url=''
-    remove_empty_feeds= True
-    no_stylesheets=True
+    category = 'magazine'
+    cover_url = ''
+    remove_empty_feeds = True
+    no_stylesheets = True
     oldest_article = 7
     max_articles_per_feed = 100000
     recursions = 0
@@ -27,15 +29,15 @@ class FocusRecipe(BasicNewsRecipe):
     simultaneous_downloads = 5
 
     r = re.compile('.*(?P<url>http:\/\/(www.focus.pl)|(rss.feedsportal.com\/c)\/.*\.html?).*')
-    keep_only_tags =[]
-    keep_only_tags.append(dict(name = 'div', attrs = {'id' : 'cll'}))
+    keep_only_tags = []
+    keep_only_tags.append(dict(name='div', attrs={'id': 'cll'}))
 
-    remove_tags =[]
-    remove_tags.append(dict(name = 'div', attrs = {'class' : 'ulm noprint'}))
-    remove_tags.append(dict(name = 'div', attrs = {'class' : 'txb'}))
-    remove_tags.append(dict(name = 'div', attrs = {'class' : 'h2'}))
-    remove_tags.append(dict(name = 'ul', attrs = {'class' : 'txu'}))
-    remove_tags.append(dict(name = 'div', attrs = {'class' : 'ulc'}))
+    remove_tags = []
+    remove_tags.append(dict(name='div', attrs={'class': 'ulm noprint'}))
+    remove_tags.append(dict(name='div', attrs={'class': 'txb'}))
+    remove_tags.append(dict(name='div', attrs={'class': 'h2'}))
+    remove_tags.append(dict(name='ul', attrs={'class': 'txu'}))
+    remove_tags.append(dict(name='div', attrs={'class': 'ulc'}))
 
     extra_css = '''
                     body {font-family: verdana, arial, helvetica, geneva, sans-serif ;}
@@ -44,18 +46,17 @@ class FocusRecipe(BasicNewsRecipe):
                     p.lead {font-weight: bold; text-align: left;}
                     .authordate {font-size: small; color: #696969;}
                     .fot{font-size: x-small; color: #666666;}
                 '''
 
-
-    feeds = [
-        ('Nauka', 'http://focus.pl.feedsportal.com/c/32992/f/532693/index.rss'),
-        ('Historia', 'http://focus.pl.feedsportal.com/c/32992/f/532694/index.rss'),
-        ('Cywilizacja', 'http://focus.pl.feedsportal.com/c/32992/f/532695/index.rss'),
-        ('Sport', 'http://focus.pl.feedsportal.com/c/32992/f/532696/index.rss'),
-        ('Technika', 'http://focus.pl.feedsportal.com/c/32992/f/532697/index.rss'),
-        ('Przyroda', 'http://focus.pl.feedsportal.com/c/32992/f/532698/index.rss'),
-        ('Technologie', 'http://focus.pl.feedsportal.com/c/32992/f/532699/index.rss'),
-    ]
+    feeds = [
+        ('Nauka', 'http://www.focus.pl/nauka/rss/'),
+        ('Historia', 'http://www.focus.pl/historia/rss/'),
+        ('Cywilizacja', 'http://www.focus.pl/cywilizacja/rss/'),
+        ('Sport', 'http://www.focus.pl/sport/rss/'),
+        ('Technika', 'http://www.focus.pl/technika/rss/'),
+        ('Przyroda', 'http://www.focus.pl/przyroda/rss/'),
+        ('Technologie', 'http://www.focus.pl/gadzety/rss/')
+    ]
 
     def skip_ad_pages(self, soup):
         if ('advertisement' in soup.find('title').string.lower()):
@@ -65,20 +66,20 @@ class FocusRecipe(BasicNewsRecipe):
             return None
 
     def get_cover_url(self):
-        soup=self.index_to_soup('http://www.focus.pl/magazyn/')
-        tag=soup.find(name='div', attrs={'class':'clr fl'})
+        soup = self.index_to_soup('http://www.focus.pl/magazyn/')
+        tag = soup.find(name='div', attrs={'class': 'clr fl'})
         if tag:
-            self.cover_url='http://www.focus.pl/' + tag.a['href']
+            self.cover_url = 'http://www.focus.pl/' + tag.a['href']
         return getattr(self, 'cover_url', self.cover_url)
 
     def print_version(self, url):
-        if url.count ('focus.pl.feedsportal.com'):
+        if url.count('focus.pl.feedsportal.com'):
             u = url.find('focus0Bpl')
             u = 'http://www.focus.pl/' + url[u + 11:]
             u = u.replace('0C', '/')
             u = u.replace('A', '')
-            u = u.replace ('0E','-')
+            u = u.replace('0E', '-')
             u = u.replace('/nc/1//story01.htm', '/do-druku/1')
         else:
-            u = url.replace('/nc/1','/do-druku/1')
+            u = url.replace('/nc/1', '/do-druku/1')
         return u
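The print_version hook in the hunk above rebuilds a plain focus.pl article URL from a feedsportal redirect URL, in which '0B' stands for '.', '0C' for '/' and '0E' for '-'. A standalone sketch of that string transformation, run on a made-up URL in that escaping scheme (not a real article link):

```python
# Same replacement chain as the recipe's print_version, lifted out of the
# class so it can be run on its own. The sample URL below is hypothetical.
def decode_feedsportal(url):
    u = url.find('focus0Bpl')                # locate the encoded hostname
    u = 'http://www.focus.pl/' + url[u + 11:]  # skip 'focus0Bpl0C' (11 chars)
    u = u.replace('0C', '/')
    u = u.replace('A', '')
    u = u.replace('0E', '-')
    u = u.replace('/nc/1//story01.htm', '/do-druku/1')  # force print layout
    return u

sample = ('http://focus.pl.feedsportal.com/c/32992/f/532693/s/xyz/'
          'focus0Bpl0Cnauka0Cmoj0Eartykul/nc/1//story01.htm')
decoded = decode_feedsportal(sample)
# -> 'http://www.focus.pl/nauka/moj-artykul/do-druku/1'
```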
@@ -1,104 +1,107 @@
 # -*- coding: utf-8 -*-
 from calibre.web.feeds.news import BasicNewsRecipe
 
 
 class Gazeta_Wyborcza(BasicNewsRecipe):
     title = u'Gazeta Wyborcza'
-    __author__ = 'fenuks'
+    __author__ = 'fenuks, Artur Stachecki'
     language = 'pl'
-    description ='news from gazeta.pl'
-    category='newspaper'
+    description = 'news from gazeta.pl'
+    category = 'newspaper'
     publication_type = 'newspaper'
-    masthead_url='http://bi.gazeta.pl/im/5/10285/z10285445AA.jpg'
-    INDEX='http://wyborcza.pl'
-    remove_empty_feeds= True
+    masthead_url = 'http://bi.gazeta.pl/im/5/10285/z10285445AA.jpg'
+    INDEX = 'http://wyborcza.pl'
+    remove_empty_feeds = True
     oldest_article = 3
     max_articles_per_feed = 100
-    remove_javascript=True
-    no_stylesheets=True
-    ignore_duplicate_articles = {'title', 'url'}
-    keep_only_tags = dict(id=['gazeta_article', 'article'])
-    remove_tags_after = dict(id='gazeta_article_share')
-    remove_tags = [dict(attrs={'class':['artReadMore', 'gazeta_article_related_new', 'txt_upl']}), dict(id=['gazeta_article_likes', 'gazeta_article_tools', 'rel', 'gazeta_article_tags', 'gazeta_article_share', 'gazeta_article_brand', 'gazeta_article_miniatures'])]
-
-    feeds = [(u'Kraj', u'http://rss.feedsportal.com/c/32739/f/530266/index.rss'), (u'\u015awiat', u'http://rss.feedsportal.com/c/32739/f/530270/index.rss'),
-             (u'Wyborcza.biz', u'http://wyborcza.biz/pub/rss/wyborcza_biz_wiadomosci.htm'),
-             (u'Komentarze', u'http://rss.feedsportal.com/c/32739/f/530312/index.rss'),
-             (u'Kultura', u'http://rss.gazeta.pl/pub/rss/gazetawyborcza_kultura.xml'),
-             (u'Nauka', u'http://rss.feedsportal.com/c/32739/f/530269/index.rss'),
-             (u'Opinie', u'http://rss.gazeta.pl/pub/rss/opinie.xml'),
-             (u'Gazeta \u015awi\u0105teczna', u'http://rss.feedsportal.com/c/32739/f/530431/index.rss'),
-             #(u'Du\u017cy Format', u'http://rss.feedsportal.com/c/32739/f/530265/index.rss'),
-             (u'Witamy w Polsce', u'http://rss.feedsportal.com/c/32739/f/530476/index.rss'),
-             (u'M\u0119ska Muzyka', u'http://rss.feedsportal.com/c/32739/f/530337/index.rss'),
-             (u'Lata Lec\u0105', u'http://rss.feedsportal.com/c/32739/f/530326/index.rss'),
-             (u'Solidarni z Tybetem', u'http://rss.feedsportal.com/c/32739/f/530461/index.rss'),
-             (u'W pon. - \u017bakowski', u'http://rss.feedsportal.com/c/32739/f/530491/index.rss'),
-             (u'We wt. - Kolenda-Zalewska', u'http://rss.feedsportal.com/c/32739/f/530310/index.rss'),
-             (u'\u015aroda w \u015brod\u0119', u'http://rss.feedsportal.com/c/32739/f/530428/index.rss'),
-             (u'W pi\u0105tek - Olejnik', u'http://rss.feedsportal.com/c/32739/f/530364/index.rss')
-             ]
+    remove_javascript = True
+    no_stylesheets = True
+    remove_tags_before = dict(id='k0')
+    remove_tags_after = dict(id='banP4')
+    remove_tags = [dict(name='div', attrs={'class':'rel_box'}), dict(attrs={'class':['date', 'zdjP', 'zdjM', 'pollCont', 'rel_video', 'brand', 'txt_upl']}), dict(name='div', attrs={'id':'footer'})]
+    feeds = [(u'Kraj', u'http://rss.feedsportal.com/c/32739/f/530266/index.rss'), (u'\u015awiat', u'http://rss.feedsportal.com/c/32739/f/530270/index.rss'),
+             (u'Wyborcza.biz', u'http://wyborcza.biz/pub/rss/wyborcza_biz_wiadomosci.htm'),
+             (u'Komentarze', u'http://rss.feedsportal.com/c/32739/f/530312/index.rss'),
+             (u'Kultura', u'http://rss.gazeta.pl/pub/rss/gazetawyborcza_kultura.xml'),
+             (u'Nauka', u'http://rss.feedsportal.com/c/32739/f/530269/index.rss'), (u'Opinie', u'http://rss.gazeta.pl/pub/rss/opinie.xml'), (u'Gazeta \u015awi\u0105teczna', u'http://rss.feedsportal.com/c/32739/f/530431/index.rss'), (u'Du\u017cy Format', u'http://rss.feedsportal.com/c/32739/f/530265/index.rss'), (u'Witamy w Polsce', u'http://rss.feedsportal.com/c/32739/f/530476/index.rss'), (u'M\u0119ska Muzyka', u'http://rss.feedsportal.com/c/32739/f/530337/index.rss'), (u'Lata Lec\u0105', u'http://rss.feedsportal.com/c/32739/f/530326/index.rss'), (u'Solidarni z Tybetem', u'http://rss.feedsportal.com/c/32739/f/530461/index.rss'), (u'W pon. - \u017bakowski', u'http://rss.feedsportal.com/c/32739/f/530491/index.rss'), (u'We wt. - Kolenda-Zalewska', u'http://rss.feedsportal.com/c/32739/f/530310/index.rss'), (u'\u015aroda w \u015brod\u0119', u'http://rss.feedsportal.com/c/32739/f/530428/index.rss'), (u'W pi\u0105tek - Olejnik', u'http://rss.feedsportal.com/c/32739/f/530364/index.rss'), (u'Nekrologi', u'http://rss.feedsportal.com/c/32739/f/530358/index.rss')
+             ]
 
     def skip_ad_pages(self, soup):
-        tag=soup.find(name='a', attrs={'class':'btn'})
+        tag = soup.find(name='a', attrs={'class': 'btn'})
         if tag:
-            new_soup=self.index_to_soup(tag['href'], raw=True)
+            new_soup = self.index_to_soup(tag['href'], raw=True)
             return new_soup
 
     def append_page(self, soup, appendtag):
-        loop=False
-        tag = soup.find('div', attrs={'id':'Str'})
-        if appendtag.find('div', attrs={'id':'Str'}):
-            nexturl=tag.findAll('a')
-            appendtag.find('div', attrs={'id':'Str'}).extract()
-            loop=True
+        loop = False
+        tag = soup.find('div', attrs={'id': 'Str'})
+        if appendtag.find('div', attrs={'id': 'Str'}):
+            nexturl = tag.findAll('a')
+            appendtag.find('div', attrs={'id': 'Str'}).extract()
+            loop = True
         if appendtag.find(id='source'):
             appendtag.find(id='source').extract()
         while loop:
-            loop=False
+            loop = False
             for link in nexturl:
                 if u'następne' in link.string:
-                    url= self.INDEX + link['href']
+                    url = self.INDEX + link['href']
                     soup2 = self.index_to_soup(url)
                     pagetext = soup2.find(id='artykul')
                     pos = len(appendtag.contents)
                     appendtag.insert(pos, pagetext)
-                    tag = soup2.find('div', attrs={'id':'Str'})
-                    nexturl=tag.findAll('a')
-                    loop=True
+                    tag = soup2.find('div', attrs={'id': 'Str'})
+                    nexturl = tag.findAll('a')
+                    loop = True
 
     def gallery_article(self, appendtag):
-        tag=appendtag.find(id='container_gal')
+        tag = appendtag.find(id='container_gal')
         if tag:
-            nexturl=appendtag.find(id='gal_btn_next').a['href']
+            nexturl = appendtag.find(id='gal_btn_next').a['href']
             appendtag.find(id='gal_navi').extract()
             while nexturl:
-                soup2=self.index_to_soup(nexturl)
-                pagetext=soup2.find(id='container_gal')
-                nexturl=pagetext.find(id='gal_btn_next')
+                soup2 = self.index_to_soup(nexturl)
+                pagetext = soup2.find(id='container_gal')
+                nexturl = pagetext.find(id='gal_btn_next')
                 if nexturl:
-                    nexturl=nexturl.a['href']
+                    nexturl = nexturl.a['href']
                 pos = len(appendtag.contents)
                 appendtag.insert(pos, pagetext)
-                rem=appendtag.find(id='gal_navi')
+                rem = appendtag.find(id='gal_navi')
                 if rem:
                     rem.extract()
 
     def preprocess_html(self, soup):
-        self.append_page(soup, soup.body)
-        if soup.find(id='container_gal'):
-            self.gallery_article(soup.body)
-        return soup
+        if soup.find(attrs={'class': 'piano_btn_1'}):
+            return None
+        else:
+            self.append_page(soup, soup.body)
+            if soup.find(id='container_gal'):
+                self.gallery_article(soup.body)
+            return soup
 
     def print_version(self, url):
-        if 'http://wyborcza.biz/biznes/' not in url:
-            return url
+        if url.count('rss.feedsportal.com'):
+            u = url.find('wyborcza0Bpl')
+            u = 'http://www.wyborcza.pl/' + url[u + 11:]
+            u = u.replace('0C', '/')
+            u = u.replace('A', '')
+            u = u.replace('0E', '-')
+            u = u.replace('0H', ',')
+            u = u.replace('0I', '_')
+            u = u.replace('0B', '.')
+            u = u.replace('/1,', '/2029020,')
+            u = u.replace('/story01.htm', '')
+            print(u)
+            return u
+        elif 'http://wyborcza.pl/1' in url:
+            return url.replace('http://wyborcza.pl/1', 'http://wyborcza.pl/2029020')
         else:
             return url.replace('http://wyborcza.biz/biznes/1', 'http://wyborcza.biz/biznes/2029020')
 
     def get_cover_url(self):
         soup = self.index_to_soup('http://wyborcza.pl/0,76762,3751429.html')
-        cover=soup.find(id='GWmini2')
-        soup = self.index_to_soup('http://wyborcza.pl/'+ cover.contents[3].a['href'])
-        self.cover_url='http://wyborcza.pl' + soup.img['src']
+        cover = soup.find(id='GWmini2')
+        soup = self.index_to_soup('http://wyborcza.pl/' + cover.contents[3].a['href'])
+        self.cover_url = 'http://wyborcza.pl' + soup.img['src']
         return getattr(self, 'cover_url', self.cover_url)
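The append_page and gallery_article methods in the hunk above share one pattern: follow a "next page" link repeatedly and append each page's body to the first page. A minimal standalone sketch of that accumulation loop, with a plain callable standing in for index_to_soup and a dict standing in for the site (both hypothetical):

```python
# Generic sketch of the recipe's multi-page accumulation pattern. fetch(url)
# returns (page_body, next_url_or_None); we walk the chain and concatenate.
def collect_pages(fetch, first_url):
    parts, url = [], first_url
    while url:
        body, url = fetch(url)
        parts.append(body)
    return ''.join(parts)

# Hypothetical three-page article, modelled as a dict of url -> (body, next).
site = {'/p1': ('one ', '/p2'), '/p2': ('two ', '/p3'), '/p3': ('three', None)}
article = collect_pages(lambda u: site[u], '/p1')
# -> 'one two three'
```

The recipe versions differ only in mechanics: the next link is found in the `Str` navigation div, and pages are spliced into the BeautifulSoup tree with `appendtag.insert()` instead of string concatenation.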
BIN  recipes/icons/mateusz_czytania.png  (new file, 1.1 KiB)
BIN  recipes/icons/rushisaband.png  (new file, 965 B)
BIN  recipes/icons/rynek_infrastruktury.png  (new file, 820 B)
BIN  recipes/icons/rynek_kolejowy.png  (new file, 330 B)
BIN  recipes/icons/satkurier.png  (new file, 1.2 KiB)
34  recipes/kerrang.recipe  (new file)

@@ -0,0 +1,34 @@
+#!/usr/bin/env python
+
+__license__ = 'GPL v3'
+
+from calibre.web.feeds.news import BasicNewsRecipe
+
+
+class kerrang(BasicNewsRecipe):
+    title = u'Kerrang!'
+    __author__ = 'Artur Stachecki <artur.stachecki@gmail.com>'
+    language = 'en_GB'
+    description = u'UK-based magazine devoted to rock music published by Bauer Media Group'
+    oldest_article = 7
+    masthead_url = 'http://images.kerrang.com/design/kerrang/kerrangsite/logo.gif'
+    max_articles_per_feed = 100
+    simultaneous_downloads = 5
+    remove_javascript = True
+    no_stylesheets = True
+    use_embedded_content = False
+    recursions = 0
+
+    keep_only_tags = []
+    keep_only_tags.append(dict(attrs = {'class' : ['headz', 'blktxt']}))
+
+    extra_css = ''' img { display: block; margin-right: auto;}
+                    h1 {text-align: left; font-size: 22px;}'''
+
+    feeds = [(u'News', u'http://www.kerrang.com/blog/rss.xml')]
+
+    def preprocess_html(self, soup):
+        for alink in soup.findAll('a'):
+            if alink.string is not None:
+                tstr = alink.string
+                alink.replaceWith(tstr)
+        return soup
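The preprocess_html method in the new recipe above uses BeautifulSoup's replaceWith() to turn each simple `<a>...</a>` into its bare text, so links render as plain text in the e-book. The same effect on a plain HTML string can be sketched with a regex (illustrative only; regexes are not a general HTML parser, and this handles only anchors without nested markup):

```python
import re

# Crude stand-in for the BeautifulSoup replaceWith() loop: replace each
# simple anchor tag with the text it wraps.
def flatten_links(html):
    return re.sub(r'<a\b[^>]*>([^<]*)</a>', r'\1', html)

result = flatten_links('<p><a href="/x">Kerrang!</a> news</p>')
# -> '<p>Kerrang! news</p>'
```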
45  recipes/lequipe.recipe  (new file)

@@ -0,0 +1,45 @@
+#!/usr/bin/env python
+
+__license__ = 'GPL v3'
+
+from calibre.web.feeds.news import BasicNewsRecipe
+
+
+class leequipe(BasicNewsRecipe):
+    title = u'l\'equipe'
+    __author__ = 'Artur Stachecki <artur.stachecki@gmail.com>'
+    language = 'fr'
+    description = u'Retrouvez tout le sport en direct sur le site de L\'EQUIPE et suivez l\'actualité du football, rugby, basket, cyclisme, f1, volley, hand, tous les résultats sportifs'
+    oldest_article = 1
+    masthead_url = 'http://static.lequipe.fr/v6/img/logo-lequipe.png'
+    max_articles_per_feed = 100
+    simultaneous_downloads = 5
+    remove_javascript = True
+    no_stylesheets = True
+    use_embedded_content = False
+    recursions = 0
+
+    keep_only_tags = []
+    keep_only_tags.append(dict(attrs={'id': ['article']}))
+
+    remove_tags = []
+    remove_tags.append(dict(attrs={'id': ['partage', 'ensavoirplus', 'bloc_bas_breve', 'commentaires', 'tools']}))
+    remove_tags.append(dict(attrs={'class': ['partage_bis', 'date']}))
+
+    feeds = [(u'Football', u'http://www.lequipe.fr/rss/actu_rss_Football.xml'),
+             (u'Auto-Moto', u'http://www.lequipe.fr/rss/actu_rss_Auto-Moto.xml'),
+             (u'Tennis', u'http://www.lequipe.fr/rss/actu_rss_Tennis.xml'),
+             (u'Golf', u'http://www.lequipe.fr/rss/actu_rss_Golf.xml'),
+             (u'Rugby', u'http://www.lequipe.fr/rss/actu_rss_Rugby.xml'),
+             (u'Basket', u'http://www.lequipe.fr/rss/actu_rss_Basket.xml'),
+             (u'Hand', u'http://www.lequipe.fr/rss/actu_rss_Hand.xml'),
+             (u'Cyclisme', u'http://www.lequipe.fr/rss/actu_rss_Cyclisme.xml'),
+             (u'Autres Sports', u'http://pipes.yahoo.com/pipes/pipe.run?_id=2039f7f4f350c70c5e4e8633aa1b37cd&_render=rss')
+             ]
+
+    def preprocess_html(self, soup):
+        for alink in soup.findAll('a'):
+            if alink.string is not None:
+                tstr = alink.string
+                alink.replaceWith(tstr)
+        return soup
@@ -40,6 +40,6 @@ class LondonReviewOfBooks(BasicNewsRecipe):
         soup = self.index_to_soup('http://www.lrb.co.uk/')
         cover_item = soup.find('p',attrs={'class':'cover'})
         if cover_item:
-            cover_url = 'http://www.lrb.co.uk' + cover_item.a.img['src']
+            cover_url = cover_item.a.img['src']
         return cover_url
 
36  recipes/mateusz_czytania.recipe  (new file)

@@ -0,0 +1,36 @@
+#!/usr/bin/env python
+
+__license__ = 'GPL v3'
+__author__ = 'teepel <teepel44@gmail.com>'
+
+'''
+http://www.mateusz.pl/czytania
+'''
+
+from calibre.web.feeds.news import BasicNewsRecipe
+
+class czytania_mateusz(BasicNewsRecipe):
+    title = u'Czytania na ka\u017cdy dzie\u0144'
+    __author__ = 'teepel <teepel44@gmail.com>'
+    description = u'Codzienne czytania z jednego z najstarszych polskich serwisów katolickich.'
+    language = 'pl'
+    INDEX='http://www.mateusz.pl/czytania'
+    oldest_article = 1
+    remove_empty_feeds= True
+    no_stylesheets=True
+    auto_cleanup = True
+    remove_javascript = True
+    simultaneous_downloads = 2
+    max_articles_per_feed = 100
+    auto_cleanup = True
+
+    feeds = [(u'Czytania', u'http://mateusz.pl/rss/czytania/')]
+
+    remove_tags =[]
+    remove_tags.append(dict(name = 'p', attrs = {'class' : 'top'}))
+
+    # thanks t3d
+    def get_article_url(self, article):
+        link = article.get('link')
+        if 'kmt.pl' not in link:
+            return link
@@ -4,7 +4,7 @@ from calibre.web.feeds.news import BasicNewsRecipe
 
 class FocusRecipe(BasicNewsRecipe):
     __license__ = 'GPL v3'
-    __author__ = u'intromatyk <intromatyk@gmail.com>'
+    __author__ = u'Artur Stachecki <artur.stachecki@gmail.com>'
     language = 'pl'
     version = 1
 
61
recipes/naszdziennik.recipe
Normal file
61
recipes/naszdziennik.recipe
Normal file
@ -0,0 +1,61 @@
+#!/usr/bin/env python
+
+__license__ = 'GPL v3'
+
+from calibre.web.feeds.news import BasicNewsRecipe
+
+class naszdziennik(BasicNewsRecipe):
+    title = u'Nasz Dziennik'
+    __author__ = 'Artur Stachecki <artur.stachecki@gmail.com>'
+    language = 'pl'
+    description =u'Nasz Dziennik - Ogólnopolska gazeta codzienna. Podejmuje tematykę dotyczącą życia społecznego, kulturalnego, politycznego i religijnego. Propaguje wartości chrześcijańskie oraz tradycję i kulturę polską.'
+    masthead_url='http://www.naszdziennik.pl/images/logo-male.png'
+    max_articles_per_feed = 100
+    remove_javascript=True
+    no_stylesheets = True
+
+    keep_only_tags =[dict(attrs = {'id' : 'article'})]
+
+    # define a new function; it must return the list of feeds together with their articles
+    def parse_index(self):
+        # the address to parse the articles from
+        soup = self.index_to_soup('http://www.naszdziennik.pl/news')
+        # declare an empty list of feeds
+        feeds = []
+        # declare an empty dictionary of articles
+        articles = {}
+        # declare an empty list of sections
+        sections = []
+        # declare the first section as an empty string
+        section = ''
+
+        # a for loop that inspects every "news-article" tag in turn
+        for item in soup.findAll(attrs = {'class' : 'news-article'}) :
+            # inside the "news-article" tag, look for the first h4 tag
+            section = item.find('h4')
+            # assign the tag's text content to the section variable
+            section = self.tag_to_string(section)
+            # check whether the articles dictionary already has a key for this section
+            # if it does not:
+            if not articles.has_key(section) :
+                # append the new section to the list of sections
+                sections.append(section)
+                # declare the new section in the articles dictionary, keyed by the section name, with an empty list as its value
+                articles[section] = []
+            # look for the next "title-datetime" tag
+            article_title_datetime = item.find(attrs = {'class' : 'title-datetime'})
+            # inside title-datetime, find the first link
+            article_a = article_title_datetime.find('a')
+            # and turn it into an absolute link to the actual article
+            article_url = 'http://naszdziennik.pl' + article_a['href']
+            # the text between the <a> tags will be used as the title
+            article_title = self.tag_to_string(article_a)
+            # and the date is the text of the first h4 tag found inside title-datetime
+            article_date = self.tag_to_string(article_title_datetime.find('h4'))
+            # add the collected elements to the list declared in line 44
+            articles[section].append( { 'title' : article_title, 'url' : article_url, 'date' : article_date })
+        # once all articles are added, append the sections to the feed list, using the section lists stored in the dictionary
+        for section in sections:
+            feeds.append((section, articles[section]))
+        # return the list of feeds; calibre will take care of parsing them
+        return feeds
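The commented walkthrough above boils down to a plain group-by-section accumulation. A minimal standalone sketch of the same pattern (ordinary Python with hypothetical sample tuples; no calibre or BeautifulSoup needed), showing the `[(section, [article_dict, ...]), ...]` shape that `parse_index` is expected to return:

```python
def group_articles(items):
    """Group (section, title, url, date) tuples into the
    [(section, [article_dict, ...]), ...] shape returned by
    a BasicNewsRecipe.parse_index() implementation."""
    sections = []   # section names, in order of first appearance
    articles = {}   # section name -> list of article dicts
    for section, title, url, date in items:
        if section not in articles:
            # first article seen for this section: register the section
            sections.append(section)
            articles[section] = []
        articles[section].append({'title': title, 'url': url, 'date': date})
    # emit feeds in the order the sections first appeared on the page
    return [(section, articles[section]) for section in sections]
```

For example, two articles sharing a section name end up as one feed entry holding a two-element article list, preserving page order.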
recipes/rushisaband.recipe (new file, 28 lines)
@@ -0,0 +1,28 @@
+#!/usr/bin/env python
+
+__license__ = 'GPL v3'
+__author__ = 'MrStefan <mrstefaan@gmail.com>'
+
+'''
+www.rushisaband.com
+'''
+
+from calibre.web.feeds.news import BasicNewsRecipe
+
+class rushisaband(BasicNewsRecipe):
+    title = u'Rushisaband'
+    __author__ = 'MrStefan <mrstefaan@gmail.com>'
+    language = 'en_GB'
+    description =u'A blog devoted to the band RUSH and its members, Neil Peart, Geddy Lee and Alex Lifeson'
+    remove_empty_feeds= True
+    oldest_article = 7
+    max_articles_per_feed = 100
+    remove_javascript=True
+    no_stylesheets=True
+
+    keep_only_tags =[]
+    keep_only_tags.append(dict(name = 'h4'))
+    keep_only_tags.append(dict(name = 'h5'))
+    keep_only_tags.append(dict(name = 'p'))
+
+    feeds = [(u'Rush is a Band', u'http://feeds2.feedburner.com/rushisaband/blog')]
recipes/rynek_infrastruktury.recipe (new file, 41 lines)
@@ -0,0 +1,41 @@
+#!/usr/bin/env python
+
+__license__ = 'GPL v3'
+__author__ = 'teepel <teepel44@gmail.com>'
+
+'''
+http://www.rynekinfrastruktury.pl
+'''
+
+from calibre.web.feeds.news import BasicNewsRecipe
+
+class prawica_recipe(BasicNewsRecipe):
+    title = u'Rynek Infrastruktury'
+    __author__ = 'teepel <teepel44@gmail.com>'
+    language = 'pl'
+    description =u'Portal "Rynek Infrastruktury" to źródło informacji o kluczowych elementach polskiej gospodarki: drogach, kolei, lotniskach, portach, telekomunikacji, energetyce, prawie i polityce, wzmocnione eksperckimi komentarzami kluczowych analityków.'
+    remove_empty_feeds= True
+    oldest_article = 1
+    max_articles_per_feed = 100
+    remove_javascript=True
+    no_stylesheets=True
+
+    feeds = [
+        (u'Drogi', u'http://www.rynekinfrastruktury.pl/rss/41'),
+        (u'Lotniska', u'http://www.rynekinfrastruktury.pl/rss/42'),
+        (u'Kolej', u'http://www.rynekinfrastruktury.pl/rss/37'),
+        (u'Energetyka', u'http://www.rynekinfrastruktury.pl/rss/30'),
+        (u'Telekomunikacja', u'http://www.rynekinfrastruktury.pl/rss/31'),
+        (u'Porty', u'http://www.rynekinfrastruktury.pl/rss/32'),
+        (u'Prawo i polityka', u'http://www.rynekinfrastruktury.pl/rss/47'),
+        (u'Komentarze', u'http://www.rynekinfrastruktury.pl/rss/38'),
+        ]
+
+    keep_only_tags =[]
+    keep_only_tags.append(dict(name = 'div', attrs = {'class' : 'articleContent'}))
+
+    remove_tags =[]
+    remove_tags.append(dict(name = 'span', attrs = {'class' : 'date'}))
+
+    def print_version(self, url):
+        return url.replace('http://www.rynekinfrastruktury.pl/artykul/', 'http://www.rynekinfrastruktury.pl/artykul/drukuj/')
recipes/rynek_kolejowy.recipe (new file, 40 lines)
@@ -0,0 +1,40 @@
+#!/usr/bin/env python
+
+__license__ = 'GPL v3'
+__author__ = 'teepel <teepel44@gmail.com>'
+
+'''
+rynek-kolejowy.pl
+'''
+
+from calibre.web.feeds.news import BasicNewsRecipe
+
+class rynek_kolejowy(BasicNewsRecipe):
+    title = u'Rynek Kolejowy'
+    __author__ = 'teepel <teepel44@gmail.com>'
+    language = 'pl'
+    description =u'Rynek Kolejowy - kalendarium wydarzeń branży kolejowej, konferencje, sympozja, targi kolejowe, krajowe i zagraniczne.'
+    masthead_url='http://p.wnp.pl/images/i/partners/rynek_kolejowy.gif'
+    remove_empty_feeds= True
+    oldest_article = 1
+    max_articles_per_feed = 100
+    remove_javascript=True
+    no_stylesheets=True
+
+    keep_only_tags =[]
+    keep_only_tags.append(dict(name = 'div', attrs = {'id' : 'mainContent'}))
+
+    remove_tags =[]
+    remove_tags.append(dict(name = 'div', attrs = {'class' : 'right no-print'}))
+    remove_tags.append(dict(name = 'div', attrs = {'id' : 'font-size'}))
+    remove_tags.append(dict(name = 'div', attrs = {'class' : 'no-print'}))
+
+    extra_css = '''.wiadomosc_title{ font-size: 1.4em; font-weight: bold; }'''
+
+    feeds = [(u'Wiadomości', u'http://www.rynek-kolejowy.pl/rss/rss.php')]
+
+    def print_version(self, url):
+        segment = url.split('/')
+        urlPart = segment[3]
+        return 'http://www.rynek-kolejowy.pl/drukuj.php?id=' + urlPart
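The `print_version` hook in the recipe above is pure string surgery. A standalone sketch (plain Python, using a hypothetical article URL) of the same split-and-rebuild, which relies on the numeric article id being the first path component of the URL:

```python
def print_version(url):
    # Splitting an absolute URL on '/' puts the first path component
    # (the article id on this site) at index 3:
    # ['http:', '', 'www.rynek-kolejowy.pl', '<id>', '<slug>', ...]
    segment = url.split('/')
    urlPart = segment[3]
    # rebuild the URL against the site's print endpoint
    return 'http://www.rynek-kolejowy.pl/drukuj.php?id=' + urlPart
```

Note this silently assumes every feed URL follows that `/<id>/<slug>` shape; a URL without a fourth segment would raise an `IndexError`.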
@@ -34,16 +34,20 @@ class RzeczpospolitaRecipe(BasicNewsRecipe):
     keep_only_tags.append(dict(name = 'div', attrs = {'id' : 'story'}))

     remove_tags =[]
+    remove_tags.append(dict(name = 'div', attrs = {'class' : 'articleLeftBox'}))
+    remove_tags.append(dict(name = 'div', attrs = {'class' : 'socialNewTools'}))
     remove_tags.append(dict(name = 'div', attrs = {'id' : 'socialTools'}))
     remove_tags.append(dict(name = 'div', attrs = {'class' : 'articleToolBoxTop'}))
     remove_tags.append(dict(name = 'div', attrs = {'class' : 'clr'}))
     remove_tags.append(dict(name = 'div', attrs = {'id' : 'recommendations'}))
-    remove_tags.append(dict(name = 'div', attrs = {'id' : 'editorPicks'}))
+    remove_tags.append(dict(name = 'div', attrs = {'class' : 'editorPicks'}))
+    remove_tags.append(dict(name = 'div', attrs = {'class' : 'editorPicks editorPicksFirst'}))
     remove_tags.append(dict(name = 'div', attrs = {'id' : 'articleCopyrightText'}))
     remove_tags.append(dict(name = 'div', attrs = {'id' : 'articleCopyrightButton'}))
     remove_tags.append(dict(name = 'div', attrs = {'class' : 'articleToolBoxBottom'}))
     remove_tags.append(dict(name = 'div', attrs = {'class' : 'more'}))
     remove_tags.append(dict(name = 'div', attrs = {'class' : 'addRecommendation'}))
+    remove_tags.append(dict(name = 'h3', attrs = {'id' : 'tags'}))

     extra_css = '''
     body {font-family: verdana, arial, helvetica, geneva, sans-serif ;}
@@ -67,3 +71,4 @@ class RzeczpospolitaRecipe(BasicNewsRecipe):
         return start + '/' + index + '?print=tak'
+
recipes/satkurier.recipe (new file, 47 lines)
@@ -0,0 +1,47 @@
+#!/usr/bin/env python
+
+__license__ = 'GPL v3'
+
+from calibre.web.feeds.news import BasicNewsRecipe
+
+class SATKurier(BasicNewsRecipe):
+    title = u'SATKurier.pl'
+    __author__ = 'Artur Stachecki <artur.stachecki@gmail.com>'
+    language = 'pl'
+    description = u'Największy i najstarszy serwis poświęcony\
+                    telewizji cyfrowej, przygotowywany przez wydawcę\
+                    miesięcznika SAT Kurier. Bieżące wydarzenia\
+                    z rynku mediów i nowych technologii.'
+    oldest_article = 7
+    masthead_url = 'http://satkurier.pl/img/header_sk_logo.gif'
+    max_articles_per_feed = 100
+    simultaneous_downloads = 5
+    remove_javascript = True
+    no_stylesheets = True
+
+    keep_only_tags = []
+    keep_only_tags.append(dict(name='div', attrs={'id': ['single_news', 'content']}))
+
+    remove_tags = []
+    remove_tags.append(dict(attrs={'id': ['news_info', 'comments']}))
+    remove_tags.append(dict(attrs={'href': '#czytaj'}))
+    remove_tags.append(dict(attrs={'align': 'center'}))
+    remove_tags.append(dict(attrs={'class': ['date', 'category', 'right mini-add-comment', 'socialLinks', 'commentlist']}))
+
+    remove_tags_after = [(dict(id='entry'))]
+
+    feeds = [(u'Najnowsze wiadomości', u'http://feeds.feedburner.com/satkurierpl?format=xml'),
+             (u'Sport w telewizji', u'http://feeds.feedburner.com/satkurier/sport?format=xml'),
+             (u'Blog', u'http://feeds.feedburner.com/satkurier/blog?format=xml')]
+
+    def preprocess_html(self, soup):
+        image = soup.find(attrs={'id': 'news_mini_photo'})
+        if image:
+            image.extract()
+            header = soup.find('h1')
+            header.replaceWith(header.prettify() + image.prettify())
+        for alink in soup.findAll('a'):
+            if alink.string is not None:
+                tstr = alink.string
+                alink.replaceWith(tstr)
+        return soup
@@ -1,34 +1,50 @@
 from calibre.web.feeds.news import BasicNewsRecipe
+from calibre.utils.magick import Image
 class tvn24(BasicNewsRecipe):
     title = u'TVN24'
     oldest_article = 7
     max_articles_per_feed = 100
-    __author__ = 'fenuks'
+    __author__ = 'fenuks, Artur Stachecki'
     description = u'Sport, Biznes, Gospodarka, Informacje, Wiadomości Zawsze aktualne wiadomości z Polski i ze świata'
     category = 'news'
     language = 'pl'
-    #masthead_url= 'http://www.tvn24.pl/_d/topmenu/logo2.gif'
-    cover_url= 'http://www.userlogos.org/files/logos/Struna/TVN24.jpg'
-    extra_css = 'ul {list-style:none;} \
-    li {list-style:none; float: left; margin: 0 0.15em;} \
-    h2 {font-size: medium} \
-    .date60m {float: left; margin: 0 10px 0 5px;}'
+    masthead_url= 'http://www.tvn24.pl/_d/topmenu/logo2.gif'
+    cover_url= 'http://www.tvn24.pl/_d/topmenu/logo2.gif'
+    extra_css= 'ul {list-style: none; padding: 0; margin: 0;} li {float: left;margin: 0 0.15em;}'
     remove_empty_feeds = True
     remove_javascript = True
     no_stylesheets = True
-    use_embedded_content = False
-    ignore_duplicate_articles = {'title', 'url'}
-    keep_only_tags=[dict(name='h1', attrs={'class':['size30 mt10 pb10', 'size38 mt10 pb15']}), dict(name='figure', attrs={'class':'articleMainPhoto articleMainPhotoWide'}), dict(name='article', attrs={'class':['mb20', 'mb20 textArticleDefault']}), dict(name='ul', attrs={'class':'newsItem'})]
-    remove_tags = [dict(name='aside', attrs={'class':['innerArticleModule onRight cols externalContent', 'innerArticleModule center']}), dict(name='div', attrs={'class':['thumbsGallery', 'articleTools', 'article right rd7', 'heading', 'quizContent']}), dict(name='a', attrs={'class':'watchMaterial text'}), dict(name='section', attrs={'class':['quiz toCenter', 'quiz toRight']})]
-    feeds = [(u'Najnowsze', u'http://www.tvn24.pl/najnowsze.xml'),
-    (u'Polska', u'www.tvn24.pl/polska.xml'), (u'\u015awiat', u'http://www.tvn24.pl/swiat.xml'), (u'Sport', u'http://www.tvn24.pl/sport.xml'), (u'Biznes', u'http://www.tvn24.pl/biznes.xml'), (u'Meteo', u'http://www.tvn24.pl/meteo.xml'), (u'Micha\u0142ki', u'http://www.tvn24.pl/michalki.xml'), (u'Kultura', u'http://www.tvn24.pl/kultura.xml')]
+    keep_only_tags=[
+#        dict(name='h1', attrs={'class':'size38 mt20 pb20'}),
+        dict(name='div', attrs={'class':'mainContainer'}),
+#        dict(name='p'),
+#        dict(attrs={'class':['size18 mt10 mb15', 'bold topicSize1', 'fromUsers content', 'textArticleDefault']})
+    ]
+    remove_tags=[
+        dict(attrs={'class':['commentsInfo', 'textSize', 'related newsNews align-right', 'box', 'watchMaterial text', 'related galleryGallery align-center', 'advert block-alignment-right', 'userActions', 'socialBookmarks', 'im yourArticle fl', 'dynamicButton addComment fl', 'innerArticleModule onRight cols externalContent', 'thumbsGallery', 'relatedObject customBlockquote align-right', 'lead', 'mainRightColumn', 'articleDateContainer borderGreyBottom', 'socialMediaContainer onRight loaded', 'quizContent', 'twitter', 'facebook', 'googlePlus', 'share', 'voteResult', 'reportTitleBar bgBlue_v4 mb15', 'innerVideoModule center']}),
+        dict(name='article', attrs={'class':['singleArtPhotoCenter', 'singleArtPhotoRight', 'singleArtPhotoLeft']}),
+        dict(name='section', attrs={'id':['forum', 'innerArticle', 'quiz toCenter', 'mb20']}),
+        dict(name='div', attrs={'class':'socialMediaContainer big p20 mb20 borderGrey loaded'})
+    ]
+    remove_tags_after=[dict(name='li', attrs={'class':'share'})]
+    feeds = [(u'Najnowsze', u'http://www.tvn24.pl/najnowsze.xml'), ]
+    #(u'Polska', u'www.tvn24.pl/polska.xml'), (u'\u015awiat', u'http://www.tvn24.pl/swiat.xml'), (u'Sport', u'http://www.tvn24.pl/sport.xml'), (u'Biznes', u'http://www.tvn24.pl/biznes.xml'), (u'Meteo', u'http://www.tvn24.pl/meteo.xml'), (u'Micha\u0142ki', u'http://www.tvn24.pl/michalki.xml'), (u'Kultura', u'http://www.tvn24.pl/kultura.xml')]

     def preprocess_html(self, soup):
-        for item in soup.findAll(style=True):
-            del item['style']
-        tag = soup.find(name='ul', attrs={'class':'newsItem'})
-        if tag:
-            tag.name='div'
-            tag.li.name='div'
+        for alink in soup.findAll('a'):
+            if alink.string is not None:
+                tstr = alink.string
+                alink.replaceWith(tstr)
+        return soup
+
+    def postprocess_html(self, soup, first):
+        #process all the images
+        for tag in soup.findAll(lambda tag: tag.name.lower()=='img' and tag.has_key('src')):
+            iurl = tag['src']
+            img = Image()
+            img.open(iurl)
+            if img < 0:
+                raise RuntimeError('Out of memory')
+            img.type = "GrayscaleType"
+            img.save(iurl)
         return soup
@@ -3,6 +3,8 @@
 __license__ = 'GPL v3'
 __copyright__ = '2010, matek09, matek09@gmail.com'
 __copyright__ = 'Modified 2011, Mariusz Wolek <mariusz_dot_wolek @ gmail dot com>'
+__copyright__ = 'Modified 2012, Artur Stachecki <artur.stachecki@gmail.com>'
+
 from calibre.web.feeds.news import BasicNewsRecipe
 import re
@@ -11,7 +13,7 @@ class Wprost(BasicNewsRecipe):
     EDITION = 0
     FIND_LAST_FULL_ISSUE = True
     EXCLUDE_LOCKED = True
-    ICO_BLOCKED = 'http://www.wprost.pl/G/icons/ico_blocked.gif'
+    ICO_BLOCKED = 'http://www.wprost.pl/G/layout2/ico_blocked.png'

     title = u'Wprost'
     __author__ = 'matek09'
@@ -20,6 +22,7 @@ class Wprost(BasicNewsRecipe):
     no_stylesheets = True
     language = 'pl'
     remove_javascript = True
+    recursions = 0

     remove_tags_before = dict(dict(name = 'div', attrs = {'id' : 'print-layer'}))
     remove_tags_after = dict(dict(name = 'div', attrs = {'id' : 'print-layer'}))
@@ -35,13 +38,15 @@ class Wprost(BasicNewsRecipe):
     (re.compile(r'\<td\>\<tr\>\<\/table\>'), lambda match: ''),
     (re.compile(r'\<table .*?\>'), lambda match: ''),
     (re.compile(r'\<tr>'), lambda match: ''),
-    (re.compile(r'\<td .*?\>'), lambda match: '')]
+    (re.compile(r'\<td .*?\>'), lambda match: ''),
+    (re.compile(r'\<div id="footer"\>.*?\</footer\>'), lambda match: '')]

     remove_tags =[]
     remove_tags.append(dict(name = 'div', attrs = {'class' : 'def element-date'}))
     remove_tags.append(dict(name = 'div', attrs = {'class' : 'def silver'}))
     remove_tags.append(dict(name = 'div', attrs = {'id' : 'content-main-column-right'}))

     extra_css = '''
     .div-header {font-size: x-small; font-weight: bold}
     '''
@@ -59,27 +64,26 @@ class Wprost(BasicNewsRecipe):
         a = 0
         if self.FIND_LAST_FULL_ISSUE:
             ico_blocked = soup.findAll('img', attrs={'src' : self.ICO_BLOCKED})
-            a = ico_blocked[-1].findNext('a', attrs={'title' : re.compile('Zobacz spis tre.ci')})
+            a = ico_blocked[-1].findNext('a', attrs={'title' : re.compile(r'Spis *', re.IGNORECASE | re.DOTALL)})
         else:
-            a = soup.find('a', attrs={'title' : re.compile('Zobacz spis tre.ci')})
+            a = soup.find('a', attrs={'title' : re.compile(r'Spis *', re.IGNORECASE | re.DOTALL)})
         self.EDITION = a['href'].replace('/tygodnik/?I=', '')
-        self.cover_url = a.img['src']
+        self.EDITION_SHORT = a['href'].replace('/tygodnik/?I=15', '')
+        self.cover_url = a.img['src']

     def parse_index(self):
         self.find_last_issue()
         soup = self.index_to_soup('http://www.wprost.pl/tygodnik/?I=' + self.EDITION)
         feeds = []
-        for main_block in soup.findAll(attrs={'class':'main-block-s3 s3-head head-red3'}):
+        for main_block in soup.findAll(attrs={'id': 'content-main-column-element-content'}):
             articles = list(self.find_articles(main_block))
             if len(articles) > 0:
-                section = self.tag_to_string(main_block)
+                section = self.tag_to_string(main_block.find('h3'))
                 feeds.append((section, articles))
         return feeds

     def find_articles(self, main_block):
-        for a in main_block.findAllNext( attrs={'style':['','padding-top: 15px;']}):
+        for a in main_block.findAll('a'):
             if a.name in "td":
                 break
             if self.EXCLUDE_LOCKED & self.is_blocked(a):
@@ -91,3 +95,4 @@ class Wprost(BasicNewsRecipe):
             'description' : ''
             }
+
@@ -11,7 +11,6 @@ let g:syntastic_cpp_include_dirs = [
 \'/usr/include/freetype2',
 \'/usr/include/fontconfig',
 \'src/qtcurve/common', 'src/qtcurve',
-\'src/sfntly/src', 'src/sfntly/src/sample',
 \'/usr/include/ImageMagick',
 \]
 let g:syntastic_c_include_dirs = g:syntastic_cpp_include_dirs
@@ -19,7 +19,6 @@ from setup.build_environment import (chmlib_inc_dirs,
     magick_libs, chmlib_lib_dirs, sqlite_inc_dirs, icu_inc_dirs,
     icu_lib_dirs, win_ddk_lib_dirs, ft_libs, ft_lib_dirs, ft_inc_dirs,
     zlib_libs, zlib_lib_dirs, zlib_inc_dirs)
-from setup.sfntly import SfntlyBuilderMixin
 MT
 isunix = islinux or isosx or isbsd
@@ -63,26 +62,8 @@ if isosx:
     icu_libs = ['icucore']
     icu_cflags = ['-DU_DISABLE_RENAMING'] # Needed to use system libicucore.dylib

-class SfntlyExtension(Extension, SfntlyBuilderMixin):
-
-    def __init__(self, *args, **kwargs):
-        Extension.__init__(self, *args, **kwargs)
-        SfntlyBuilderMixin.__init__(self)
-
-    def preflight(self, *args, **kwargs):
-        self(*args, **kwargs)
-
 extensions = [
-
-    SfntlyExtension('sfntly',
-        ['calibre/utils/fonts/sfntly.cpp'],
-        headers= ['calibre/utils/fonts/sfntly.h'],
-        libraries=icu_libs,
-        lib_dirs=icu_lib_dirs,
-        inc_dirs=icu_inc_dirs,
-        cflags=icu_cflags
-        ),
-
     Extension('speedup',
         ['calibre/utils/speedup.c'],
         ),
setup/iso_639/ca.po (1270 lines changed; diff suppressed because it is too large)
setup/sfntly.py (deleted file, 93 lines)
@@ -1,93 +0,0 @@
-#!/usr/bin/env python
-# vim:fileencoding=UTF-8:ts=4:sw=4:sta:et:sts=4:fdm=marker:ai
-from __future__ import (unicode_literals, division, absolute_import,
-                        print_function)
-
-__license__ = 'GPL v3'
-__copyright__ = '2012, Kovid Goyal <kovid at kovidgoyal.net>'
-__docformat__ = 'restructuredtext en'
-
-import shlex, os
-from glob import glob
-
-from setup import iswindows
-
-class Group(object):
-
-    def __init__(self, name, base, build_base, cflags):
-        self.name = name
-        self.cflags = cflags
-        self.headers = frozenset(glob(os.path.join(base, '*.h')))
-        self.src_files = glob(os.path.join(base, '*.cc'))
-        self.bdir = os.path.abspath(os.path.join(build_base, name))
-        if not os.path.exists(self.bdir):
-            os.makedirs(self.bdir)
-        self.objects = [os.path.join(self.bdir,
-            os.path.basename(x).rpartition('.')[0] + ('.obj' if iswindows else
-                '.o')) for x in self.src_files]
-
-    def __call__(self, compiler, linker, builder, all_headers):
-        for src, obj in zip(self.src_files, self.objects):
-            if builder.newer(obj, [src] + list(all_headers)):
-                sinc = ['/Tp'+src] if iswindows else ['-c', src]
-                oinc = ['/Fo'+obj] if iswindows else ['-o', obj]
-                cmd = [compiler] + self.cflags + sinc + oinc
-                builder.info(' '.join(cmd))
-                builder.check_call(cmd)
-
-class SfntlyBuilderMixin(object):
-
-    def __init__(self):
-        self.sfntly_cflags = [
-            '-DSFNTLY_NO_EXCEPTION',
-            '-DSFNTLY_EXPERIMENTAL',
-        ]
-        if iswindows:
-            self.sfntly_cflags += [
-                '-D_UNICODE', '-DUNICODE',
-            ] + shlex.split('/W4 /WX /Gm- /Gy /GR-')
-            self.cflags += ['-DWIN32']
-        else:
-            # Possibly add -fno-inline (slower, but more robust)
-            self.sfntly_cflags += [
-                '-Werror',
-                '-fno-exceptions',
-            ]
-        if len(self.libraries) > 1:
-            self.libraries = ['icuuc']
-        if not iswindows:
-            self.libraries += ['pthread']
-
-    def __call__(self, obj_dir, compiler, linker, builder, cflags, ldflags):
-        self.sfntly_build_dir = os.path.join(obj_dir, 'sfntly')
-        if '/Ox' in cflags:
-            cflags.remove('/Ox')
-        if '-O3' in cflags:
-            cflags.remove('-O3')
-        if '/W3' in cflags:
-            cflags.remove('/W3')
-        if '-ggdb' not in cflags:
-            cflags.insert(0, '/O2' if iswindows else '-O2')
-
-        groups = []
-        all_headers = set()
-        all_objects = []
-        src_dir = self.absolutize([os.path.join('sfntly', 'src')])[0]
-        inc_dirs = [src_dir]
-        self.inc_dirs += inc_dirs
-        inc_flags = builder.inc_dirs_to_cflags(self.inc_dirs)
-        for loc in ('', 'port', 'data', 'math', 'table', 'table/bitmap',
-                'table/core', 'table/truetype'):
-            path = os.path.join(src_dir, 'sfntly', *loc.split('/'))
-            gr = Group(loc, path, self.sfntly_build_dir, cflags+
-                    inc_flags+self.sfntly_cflags+self.cflags)
-            groups.append(gr)
-            all_headers |= gr.headers
-            all_objects.extend(gr.objects)
-
-        for group in groups:
-            group(compiler, linker, builder, all_headers)
-
-        self.extra_objs = all_objects
@@ -4,7 +4,7 @@ __license__ = 'GPL v3'
 __copyright__ = '2008, Kovid Goyal kovid@kovidgoyal.net'
 __docformat__ = 'restructuredtext en'
 __appname__ = u'calibre'
-numeric_version = (0, 9, 5)
+numeric_version = (0, 9, 6)
 __version__ = u'.'.join(map(unicode, numeric_version))
 __author__ = u"Kovid Goyal <kovid@kovidgoyal.net>"
@@ -91,7 +91,6 @@ class Plugins(collections.Mapping):
                 'speedup',
                 'freetype',
                 'woff',
-                'sfntly',
             ]
         if iswindows:
             plugins.extend(['winutil', 'wpd', 'winfonts'])
@@ -212,7 +212,7 @@ def main(args=sys.argv):
         return

     if len(args) > 1 and args[1] in ('-f', '--subset-font'):
-        from calibre.utils.fonts.subset import main
+        from calibre.utils.fonts.sfnt.subset import main
         main(['subset-font']+args[2:])
         return
@@ -18,7 +18,7 @@ from calibre.ebooks.metadata import author_to_author_sort
 class Book(Book_):

     def __init__(self, prefix, lpath, title=None, authors=None, mime=None, date=None, ContentType=None,
-                 thumbnail_name=None, size=0, other=None):
+                 thumbnail_name=None, size=None, other=None):
 #        debug_print('Book::__init__ - title=', title)
         show_debug = title is not None and title.lower().find("xxxxx") >= 0
         if show_debug:
@@ -57,7 +57,7 @@ class Book(Book_):
         except:
             self.datetime = time.gmtime()

         self.contentID = None
         self.current_shelves = []
         self.kobo_collections = []

@@ -65,7 +65,8 @@ class Book(Book_):
             self.thumbnail = ImageWrapper(thumbnail_name)

         if show_debug:
-            debug_print("Book::__init__ - self=", self)
+            debug_print("Book::__init__ end - self=", self)
+            debug_print("Book::__init__ end - title=", title, 'authors=', authors)


 class ImageWrapper(object):
@@ -517,7 +517,7 @@ class KOBO(USBMS):
                     lpath = lpath[1:]
                 #print "path: " + lpath
                 book = self.book_class(prefix, lpath, other=info)
-                if book.size is None:
+                if book.size is None or book.size == 0:
                     book.size = os.stat(self.normalize_path(path)).st_size
                 b = booklists[blist].add_book(book, replace_metadata=True)
                 if b:
@@ -667,6 +667,7 @@ class KOBO(USBMS):
                             [_('Unknown')])
         size = os.stat(cls.normalize_path(os.path.join(prefix, lpath))).st_size
         book = cls.book_class(prefix, lpath, title, authors, mime, date, ContentType, ImageID, size=size, other=mi)
+
         return book

     def get_device_paths(self):
@@ -1430,6 +1431,7 @@ class KOBOTOUCH(KOBO):
             idx = bl_cache.get(lpath, None)
             if idx is not None:# and not (accessibility == 1 and isdownloaded == 'false'):
                 if show_debug:
+                    self.debug_index = idx
                     debug_print("KoboTouch:update_booklist - idx=%d"%idx)
                     debug_print('KoboTouch:update_booklist - bl[idx].device_collections=', bl[idx].device_collections)
                     debug_print('KoboTouch:update_booklist - playlist_map=', playlist_map)
@@ -1464,13 +1466,13 @@ class KOBOTOUCH(KOBO):
                 bl[idx].device_collections = playlist_map.get(lpath,[])
                 bl[idx].current_shelves = bookshelves
                 bl[idx].kobo_collections = kobo_collections
-                changed = True

                 if show_debug:
                     debug_print('KoboTouch:update_booklist - updated bl[idx].device_collections=', bl[idx].device_collections)
                     debug_print('KoboTouch:update_booklist - playlist_map=', playlist_map, 'changed=', changed)
                     # debug_print('KoboTouch:update_booklist - book=', bl[idx])
                     debug_print("KoboTouch:update_booklist - book class=%s"%bl[idx].__class__)
+                    debug_print("KoboTouch:update_booklist - book title=%s"%bl[idx].title)
             else:
                 if show_debug:
                     debug_print('KoboTouch:update_booklist - idx is none')
@@ -1494,7 +1496,7 @@ class KOBOTOUCH(KOBO):
                 if show_debug:
                     debug_print('KoboTouch:update_booklist - class:', book.__class__)
                     # debug_print('    resolution:', book.__class__.__mro__)
-                    debug_print("    contentid:'%s'"%book.contentID)
+                    debug_print("    contentid: '%s'"%book.contentID)
                     debug_print("    title:'%s'"%book.title)
                     debug_print("    the book:", book)
                     debug_print("    author_sort:'%s'"%book.author_sort)
@@ -1512,6 +1514,7 @@ class KOBOTOUCH(KOBO):
                 changed = True
                 if show_debug:
                     debug_print('        book.device_collections', book.device_collections)
+                    debug_print('        book.title', book.title)
         except: # Probably a path encoding error
             import traceback
             traceback.print_exc()
@@ -1534,6 +1537,7 @@ class KOBOTOUCH(KOBO):
            # debug_print("KoboTouch:get_bookshelvesforbook - count bookshelves=" + unicode(count_bookshelves))
             return bookshelves

+        self.debug_index = 0
         import sqlite3 as sqlite
         with closing(sqlite.connect(
             self.normalize_path(self._main_prefix +
@@ -1635,8 +1639,11 @@ class KOBOTOUCH(KOBO):
         # Do the operation in reverse order so indices remain valid
         for idx in sorted(bl_cache.itervalues(), reverse=True):
             if idx is not None:
-                need_sync = True
-                del bl[idx]
+                if not os.path.exists(self.normalize_path(os.path.join(prefix, bl[idx].lpath))):
+                    need_sync = True
+                    del bl[idx]
+                # else:
+                #     debug_print("KoboTouch:books - Book in mtadata.calibre, on file system but not database - bl[idx].title:'%s'"%bl[idx].title)

         #print "count found in cache: %d, count of files in metadata: %d, need_sync: %s" % \
         #      (len(bl_cache), len(bl), need_sync)
@@ -1650,6 +1657,7 @@ class KOBOTOUCH(KOBO):
             USBMS.sync_booklists(self, (None, bl, None))
         else:
             USBMS.sync_booklists(self, (bl, None, None))
+        debug_print("KoboTouch:books - have done sync_booklists")

         self.report_progress(1.0, _('Getting list of books on device...'))
         debug_print("KoboTouch:books - end - oncard='%s'"%oncard)
@@ -1894,7 +1902,7 @@ class KOBOTOUCH(KOBO):
             # debug_print("KoboTouch:update_device_database_collections - self.bookshelvelist=", self.bookshelvelist)
             # Process any collections that exist
             for category, books in collections.items():
-                debug_print("KoboTouch:update_device_database_collections - category='%s'"%category)
+                debug_print("KoboTouch:update_device_database_collections - category='%s' books=%d"%(category, len(books)))
                 if create_bookshelves and not (category in supportedcategories or category in readstatuslist or category in accessibilitylist):
                     self.check_for_bookshelf(connection, category)
                 # if category in self.bookshelvelist:
@@ -1906,9 +1914,11 @@ class KOBOTOUCH(KOBO):
                         debug_print('    Title="%s"'%book.title, 'category="%s"'%category)
                         # debug_print(book)
                         debug_print('    class=%s'%book.__class__)
-                        # debug_print('    resolution:', book.__class__.__mro__)
-                        # debug_print('    subclasses:', book.__class__.__subclasses__())
                         debug_print('    book.contentID="%s"'%book.contentID)
+                        debug_print('    book.application_id="%s"'%book.application_id)
+
+                    if book.application_id is None:
+                        continue

                     category_added = False

@@ -1924,7 +1934,7 @@ class KOBOTOUCH(KOBO):
                         if category not in book.device_collections:
                             if show_debug:
                                 debug_print('        Setting bookshelf on device')
-                            self.set_bookshelf(connection, book.contentID, category)
+                            self.set_bookshelf(connection, book, category)
                             category_added = True
                     elif category in readstatuslist.keys():
                         # Manage ReadStatus
@@ -1956,12 +1966,10 @@ class KOBOTOUCH(KOBO):
         else: # No collections
             # Since no collections exist the ReadStatus needs to be reset to 0 (Unread)
             debug_print("No Collections - reseting ReadStatus")
-            if oncard == 'carda':
-                debug_print("Booklists=", booklists)
             if self.dbversion < 53:
                 self.reset_readstatus(connection, oncard)
             if self.dbversion >= 14:
-                debug_print("No Collections - reseting FavouritesIndex")
+                debug_print("No Collections - resetting FavouritesIndex")
                 self.reset_favouritesindex(connection, oncard)

         if self.supports_bookshelves():
@@ -2189,16 +2197,23 @@ class KOBOTOUCH(KOBO):

         return bookshelves

-    def set_bookshelf(self, connection, ContentID, bookshelf):
-        show_debug = self.is_debugging_title(ContentID)
+    def set_bookshelf(self, connection, book, shelfName):
+        show_debug = self.is_debugging_title(book.title)
         if show_debug:
-            debug_print('KoboTouch:set_bookshelf ContentID=' + ContentID)
-        test_query = 'SELECT 1 FROM ShelfContent WHERE ShelfName = ? and ContentId = ?'
-        test_values = (bookshelf, ContentID, )
+            debug_print('KoboTouch:set_bookshelf book.ContentID="%s"'%book.contentID)
+            debug_print('KoboTouch:set_bookshelf book.current_shelves="%s"'%book.current_shelves)
+
+        if shelfName in book.current_shelves:
+            if show_debug:
+                debug_print('        book already on shelf.')
+            return
+
+        test_query = 'SELECT _IsDeleted FROM ShelfContent WHERE ShelfName = ? and ContentId = ?'
+        test_values = (shelfName, book.contentID, )
         addquery = 'INSERT INTO ShelfContent ("ShelfName","ContentId","DateModified","_IsDeleted","_IsSynced") VALUES (?, ?, ?, "false", "false")'
-        add_values = (bookshelf, ContentID, time.strftime(self.TIMESTAMP_STRING, time.gmtime()), )
+        add_values = (shelfName, book.contentID, time.strftime(self.TIMESTAMP_STRING, time.gmtime()), )
         updatequery = 'UPDATE ShelfContent SET _IsDeleted = "false" WHERE ShelfName = ? and ContentId = ?'
-        update_values = (bookshelf, ContentID, )
+        update_values = (shelfName, book.contentID, )

         cursor = connection.cursor()
         cursor.execute(test_query, test_values)
@@ -2208,9 +2223,9 @@ class KOBOTOUCH(KOBO):
                 debug_print('        Did not find a record - adding')
             cursor.execute(addquery, add_values)
             connection.commit()
-        else:
+        elif result[0] == 'true':
             if show_debug:
-                debug_print('        Found a record - updating')
+                debug_print('        Found a record - updating - result=', result)
             cursor.execute(updatequery, update_values)
             connection.commit()

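The `set_bookshelf` rewrite above implements a check-then-insert-or-undelete flow against the Kobo `ShelfContent` table: probe `_IsDeleted`, insert the row if missing, or undelete it if the device soft-deleted it. A minimal standalone sketch of that flow (simplified schema without the `DateModified`/`_IsSynced` columns; the helper name `add_book_to_shelf` is illustrative, not calibre API):

```python
import sqlite3

def add_book_to_shelf(connection, shelf_name, content_id):
    # Mirror of the three queries in set_bookshelf, minus the timestamp column
    test_query = "SELECT _IsDeleted FROM ShelfContent WHERE ShelfName = ? and ContentId = ?"
    addquery = ("INSERT INTO ShelfContent (ShelfName, ContentId, _IsDeleted) "
                "VALUES (?, ?, 'false')")
    updatequery = ("UPDATE ShelfContent SET _IsDeleted = 'false' "
                   "WHERE ShelfName = ? and ContentId = ?")
    cursor = connection.cursor()
    cursor.execute(test_query, (shelf_name, content_id))
    result = cursor.fetchone()
    if result is None:
        cursor.execute(addquery, (shelf_name, content_id))      # no row yet: insert
    elif result[0] == 'true':
        cursor.execute(updatequery, (shelf_name, content_id))   # soft-deleted: undelete
    connection.commit()

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE ShelfContent (ShelfName TEXT, ContentId TEXT, _IsDeleted TEXT)')
add_book_to_shelf(conn, 'SF', 'book-1')                        # insert path
conn.execute("UPDATE ShelfContent SET _IsDeleted = 'true'")    # simulate a device-side delete
add_book_to_shelf(conn, 'SF', 'book-1')                        # undelete path
rows = conn.execute('SELECT ShelfName, ContentId, _IsDeleted FROM ShelfContent').fetchall()
print(rows)  # [('SF', 'book-1', 'false')]
```

Either way the row ends up present and visible, so repeated syncs of the same book are idempotent.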
@@ -12,6 +12,7 @@ import os

 import cStringIO

+from calibre.constants import isosx
 from calibre.devices.usbms.driver import USBMS

 class NOOK(USBMS):
@@ -84,6 +85,8 @@ class NOOK_COLOR(NOOK):
     description = _('Communicate with the Nook Color, TSR and Tablet eBook readers.')

     PRODUCT_ID = [0x002, 0x003, 0x004]
+    if isosx:
+        PRODUCT_ID.append(0x005) # Nook HD+
     BCD = [0x216]

     WINDOWS_MAIN_MEM = WINDOWS_CARD_A_MEM = ['EBOOK_DISK', 'NOOK_TABLET',
@@ -14,6 +14,7 @@ device. This class handles device detection.
 import os, subprocess, time, re, sys, glob
 from itertools import repeat

+from calibre import prints, as_unicode
 from calibre.devices.interface import DevicePlugin
 from calibre.devices.errors import DeviceError
 from calibre.devices.usbms.deviceconfig import DeviceConfig
@@ -901,8 +902,11 @@ class Device(DeviceConfig, DevicePlugin):
             for d in drives:
                 try:
                     winutil.eject_drive(bytes(d)[0])
-                except:
-                    pass
+                except Exception as e:
+                    try:
+                        prints("Eject failed:", as_unicode(e))
+                    except:
+                        pass

         t = Thread(target=do_it, args=[drives])
         t.daemon = True
@@ -133,6 +133,7 @@ def add_pipeline_options(parser, plumber):
              [
                 'base_font_size', 'disable_font_rescaling',
                 'font_size_mapping', 'embed_font_family',
+                'subset_embedded_fonts',
                 'line_height', 'minimum_line_height',
                 'linearize_tables',
                 'extra_css', 'filter_css',
@@ -150,8 +150,15 @@ class EPUBInput(InputFormatPlugin):
         from calibre import walk
         from calibre.ebooks import DRMError
         from calibre.ebooks.metadata.opf2 import OPF
-        zf = ZipFile(stream)
-        zf.extractall(os.getcwdu())
+        try:
+            zf = ZipFile(stream)
+            zf.extractall(os.getcwdu())
+        except:
+            log.exception('EPUB appears to be invalid ZIP file, trying a'
+                    ' more forgiving ZIP parser')
+            from calibre.utils.localunzip import extractall
+            stream.seek(0)
+            extractall(stream)
         encfile = os.path.abspath(os.path.join('META-INF', 'encryption.xml'))
         opf = self.find_opf()
         if opf is None:
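The `EPUBInput` change above wraps strict ZIP extraction in a try/except and retries with a more forgiving parser when the archive is malformed. The same try-strict-then-fallback shape, sketched with only the standard library (`forgiving_extract` here is a trivial stand-in for calibre's `calibre.utils.localunzip.extractall`):

```python
import io
import zipfile

def forgiving_extract(stream):
    # Trivial stand-in: real code would scan for local file headers and
    # salvage whatever entries it can from the damaged archive.
    return {}

def extract_epub(stream):
    try:
        zf = zipfile.ZipFile(stream)
        return {name: zf.read(name) for name in zf.namelist()}
    except zipfile.BadZipfile:
        stream.seek(0)  # the strict parser may have consumed bytes; rewind first
        return forgiving_extract(stream)

buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as zf:
    zf.writestr('mimetype', 'application/epub+zip')
buf.seek(0)
ok = extract_epub(buf)                         # valid archive: read normally
bad = extract_epub(io.BytesIO(b'not a zip'))   # invalid: falls back
print(ok)   # {'mimetype': b'application/epub+zip'}
print(bad)  # {}
```

The `stream.seek(0)` before falling back matters: the strict parser has already advanced the stream position.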
@@ -144,6 +144,22 @@ class EPUBOutput(OutputFormatPlugin):
         for u in XPath('//h:u')(root):
             u.tag = 'span'
             u.set('style', 'text-decoration:underline')
+
+        seen_ids, seen_names = set(), set()
+        for x in XPath('//*[@id or @name]')(root):
+            eid, name = x.get('id', None), x.get('name', None)
+            if eid:
+                if eid in seen_ids:
+                    del x.attrib['id']
+                else:
+                    seen_ids.add(eid)
+            if name:
+                if name in seen_names:
+                    del x.attrib['name']
+                else:
+                    seen_names.add(name)
+
+
     # }}}

     def convert(self, oeb, output_path, input_plugin, opts, log):
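The `EPUBOutput` addition above strips duplicate `id` and `name` attributes, keeping only the first element that carries each value. A sketch of the same walk using the stdlib ElementTree in place of calibre's XPath helpers:

```python
import xml.etree.ElementTree as ET

def dedup_anchors(root):
    seen_ids, seen_names = set(), set()
    for x in root.iter():
        eid, name = x.get('id'), x.get('name')
        if eid:
            if eid in seen_ids:
                del x.attrib['id']   # keep only the first element with this id
            else:
                seen_ids.add(eid)
        if name:
            if name in seen_names:
                del x.attrib['name']
            else:
                seen_names.add(name)

root = ET.fromstring('<body><a id="c1"/><a id="c1"/><a name="c1"/></body>')
dedup_anchors(root)
print([(x.get('id'), x.get('name')) for x in root.iter('a')])
# [('c1', None), (None, None), (None, 'c1')]
```

Note that `id` and `name` values are deduplicated independently, matching the two separate `seen_*` sets in the diff.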
@@ -204,6 +204,15 @@ OptionRecommendation(name='embed_font_family',
         'with some output formats, principally EPUB and AZW3.')
     ),

+OptionRecommendation(name='subset_embedded_fonts',
+        recommended_value=False, level=OptionRecommendation.LOW,
+        help=_(
+            'Subset all embedded fonts. Every embedded font is reduced '
+            'to contain only the glyphs used in this document. This decreases '
+            'the size of the font files. Useful if you are embedding a '
+            'particularly large font with lots of unused glyphs.')
+),
+
 OptionRecommendation(name='linearize_tables',
         recommended_value=False, level=OptionRecommendation.LOW,
         help=_('Some badly designed documents use tables to control the '
@@ -1112,6 +1121,10 @@ OptionRecommendation(name='search_replace',
             RemoveFakeMargins()(self.oeb, self.log, self.opts)
             RemoveAdobeMargins()(self.oeb, self.log, self.opts)

+        if self.opts.subset_embedded_fonts:
+            from calibre.ebooks.oeb.transforms.subset import SubsetFonts
+            SubsetFonts()(self.oeb, self.log, self.opts)
+
         pr(0.9)
         self.flush()

@@ -10,6 +10,7 @@ from cStringIO import StringIO
 from contextlib import closing

 from calibre.utils.zipfile import ZipFile, BadZipfile, safe_replace
+from calibre.utils.localunzip import LocalZipFile
 from calibre.ebooks.BeautifulSoup import BeautifulStoneSoup
 from calibre.ebooks.metadata import MetaInformation
 from calibre.ebooks.metadata.opf2 import OPF
@@ -105,10 +106,13 @@ class OCFReader(OCF):

 class OCFZipReader(OCFReader):
     def __init__(self, stream, mode='r', root=None):
-        try:
-            self.archive = ZipFile(stream, mode=mode)
-        except BadZipfile:
-            raise EPubException("not a ZIP .epub OCF container")
+        if isinstance(stream, (LocalZipFile, ZipFile)):
+            self.archive = stream
+        else:
+            try:
+                self.archive = ZipFile(stream, mode=mode)
+            except BadZipfile:
+                raise EPubException("not a ZIP .epub OCF container")
         self.root = root
         if self.root is None:
             name = getattr(stream, 'name', False)
@@ -119,8 +123,18 @@ class OCFZipReader(OCFReader):
         super(OCFZipReader, self).__init__()

     def open(self, name, mode='r'):
+        if isinstance(self.archive, LocalZipFile):
+            return self.archive.open(name)
         return StringIO(self.archive.read(name))

+def get_zip_reader(stream, root=None):
+    try:
+        zf = ZipFile(stream, mode='r')
+    except:
+        stream.seek(0)
+        zf = LocalZipFile(stream)
+    return OCFZipReader(zf, root=root)
+
 class OCFDirReader(OCFReader):
     def __init__(self, path):
         self.root = path
@@ -184,7 +198,12 @@ def render_cover(opf, opf_path, zf, reader=None):
 def get_cover(opf, opf_path, stream, reader=None):
     raster_cover = opf.raster_cover
     stream.seek(0)
-    zf = ZipFile(stream)
+    try:
+        zf = ZipFile(stream)
+    except:
+        stream.seek(0)
+        zf = LocalZipFile(stream)
+
     if raster_cover:
         base = posixpath.dirname(opf_path)
         cpath = posixpath.normpath(posixpath.join(base, raster_cover))
@@ -207,7 +226,7 @@ def get_cover(opf, opf_path, stream, reader=None):
 def get_metadata(stream, extract_cover=True):
     """ Return metadata as a :class:`Metadata` object """
     stream.seek(0)
-    reader = OCFZipReader(stream)
+    reader = get_zip_reader(stream)
     mi = reader.opf.to_book_metadata()
     if extract_cover:
         try:
@@ -232,7 +251,7 @@ def _write_new_cover(new_cdata, cpath):

 def set_metadata(stream, mi, apply_null=False, update_timestamp=False):
     stream.seek(0)
-    reader = OCFZipReader(stream, root=os.getcwdu())
+    reader = get_zip_reader(stream, root=os.getcwdu())
     raster_cover = reader.opf.raster_cover
     mi = MetaInformation(mi)
     new_cdata = None
@@ -283,7 +302,11 @@ def set_metadata(stream, mi, apply_null=False, update_timestamp=False):
         reader.opf.timestamp = mi.timestamp

     newopf = StringIO(reader.opf.render())
-    safe_replace(stream, reader.container[OPF.MIMETYPE], newopf,
+    if isinstance(reader.archive, LocalZipFile):
+        reader.archive.safe_replace(reader.container[OPF.MIMETYPE], newopf,
+            extra_replacements=replacements)
+    else:
+        safe_replace(stream, reader.container[OPF.MIMETYPE], newopf,
             extra_replacements=replacements)
     try:
         if cpath is not None:
src/calibre/ebooks/oeb/transforms/subset.py (new file, 284 lines)
@@ -0,0 +1,284 @@
+#!/usr/bin/env python
+# vim:fileencoding=UTF-8:ts=4:sw=4:sta:et:sts=4:fdm=marker:ai
+from __future__ import (unicode_literals, division, absolute_import,
+                        print_function)
+
+__license__   = 'GPL v3'
+__copyright__ = '2012, Kovid Goyal <kovid at kovidgoyal.net>'
+__docformat__ = 'restructuredtext en'
+
+from collections import defaultdict
+
+from calibre.ebooks.oeb.base import urlnormalize
+from calibre.utils.fonts.sfnt.subset import subset, NoGlyphs, UnsupportedFont
+
+class SubsetFonts(object):
+
+    '''
+    Subset all embedded fonts. Must be run after CSS flattening, as it requires
+    CSS normalization and flattening to work.
+    '''
+
+    def __call__(self, oeb, log, opts):
+        self.oeb, self.log, self.opts = oeb, log, opts
+
+        self.find_embedded_fonts()
+        if not self.embedded_fonts:
+            self.log.debug('No embedded fonts found')
+            return
+        self.find_style_rules()
+        self.find_font_usage()
+
+        totals = [0, 0]
+
+        def remove(font):
+            totals[1] += len(font['item'].data)
+            self.oeb.manifest.remove(font['item'])
+            font['rule'].parentStyleSheet.deleteRule(font['rule'])
+
+        for font in self.embedded_fonts:
+            if not font['chars']:
+                self.log('The font %s is unused. Removing it.'%font['src'])
+                remove(font)
+                continue
+            try:
+                raw, old_stats, new_stats = subset(font['item'].data, font['chars'])
+            except NoGlyphs:
+                self.log('The font %s has no used glyphs. Removing it.'%font['src'])
+                remove(font)
+                continue
+            except UnsupportedFont as e:
+                self.log.warn('The font %s is unsupported for subsetting. %s'%(
+                    font['src'], e))
+                sz = len(font['item'].data)
+                totals[0] += sz
+                totals[1] += sz
+            else:
+                font['item'].data = raw
+                nlen = sum(new_stats.itervalues())
+                olen = sum(old_stats.itervalues())
+                self.log('Decreased the font %s to %.1f%% of its original size'%
+                        (font['src'], nlen/olen *100))
+                totals[0] += nlen
+                totals[1] += olen
+
+            font['item'].unload_data_from_memory()
+
+        if totals[0]:
+            self.log('Reduced total font size to %.1f%% of original'%
+                    (totals[0]/totals[1] * 100))
+
+    def get_font_properties(self, rule, default=None):
+        '''
+        Given a CSS rule, extract normalized font properties from
+        it. Note that shorthand font property should already have been expanded
+        by the CSS flattening code.
+        '''
+        props = {}
+        s = rule.style
+        for q in ('font-family', 'src', 'font-weight', 'font-stretch',
+                'font-style'):
+            g = 'uri' if q == 'src' else 'value'
+            try:
+                val = s.getProperty(q).propertyValue[0]
+                val = getattr(val, g)
+                if q == 'font-family':
+                    val = [x.value for x in s.getProperty(q).propertyValue]
+                    if val and val[0] == 'inherit':
+                        val = None
+            except (IndexError, KeyError, AttributeError, TypeError, ValueError):
+                val = None if q in {'src', 'font-family'} else default
+            if q in {'font-weight', 'font-stretch', 'font-style'}:
+                val = unicode(val).lower() if (val or val == 0) else val
+                if val == 'inherit':
+                    val = default
+            if q == 'font-weight':
+                val = {'normal':'400', 'bold':'700'}.get(val, val)
+                if val not in {'100', '200', '300', '400', '500', '600', '700',
+                        '800', '900', 'bolder', 'lighter'}:
+                    val = default
+                if val == 'normal': val = '400'
+            elif q == 'font-style':
+                if val not in {'normal', 'italic', 'oblique'}:
+                    val = default
+            elif q == 'font-stretch':
+                if val not in { 'normal', 'ultra-condensed', 'extra-condensed',
+                        'condensed', 'semi-condensed', 'semi-expanded',
+                        'expanded', 'extra-expanded', 'ultra-expanded'}:
+                    val = default
+            props[q] = val
+        return props
+
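`get_font_properties` above maps the CSS keywords `normal` and `bold` onto the numeric weight scale and discards anything outside the CSS value set. That normalization step in isolation (the `normalize_weight` helper name is illustrative, not calibre API):

```python
# CSS font-weight values accepted by the transform above
VALID_WEIGHTS = {'100', '200', '300', '400', '500', '600', '700', '800',
                 '900', 'bolder', 'lighter'}

def normalize_weight(val, default=None):
    # keyword -> numeric, as in the diff: 'normal' -> '400', 'bold' -> '700'
    val = {'normal': '400', 'bold': '700'}.get(val, val)
    return val if val in VALID_WEIGHTS else default

weights = [normalize_weight(v) for v in ('normal', 'bold', '600', 'heavy')]
print(weights)  # ['400', '700', '600', None]
```

Unknown keywords such as `heavy` fall back to the caller-supplied default, which in the transform is `'normal'` for `@font-face` rules.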
+    def find_embedded_fonts(self):
+        '''
+        Find all @font-face rules and extract the relevant info from them.
+        '''
+        self.embedded_fonts = []
+        for item in self.oeb.manifest:
+            if not hasattr(item.data, 'cssRules'): continue
+            for i, rule in enumerate(item.data.cssRules):
+                if rule.type != rule.FONT_FACE_RULE:
+                    continue
+                props = self.get_font_properties(rule, default='normal')
+                if not props['font-family'] or not props['src']:
+                    continue
+
+                path = item.abshref(props['src'])
+                ff = self.oeb.manifest.hrefs.get(urlnormalize(path), None)
+                if not ff:
+                    continue
+                props['item'] = ff
+                if props['font-weight'] in {'bolder', 'lighter'}:
+                    props['font-weight'] = '400'
+                props['weight'] = int(props['font-weight'])
+                props['chars'] = set()
+                props['rule'] = rule
+                self.embedded_fonts.append(props)
+
+    def find_style_rules(self):
+        '''
+        Extract all font related style information from all stylesheets into a
+        dict mapping classes to font properties specified by that class. All
+        the heavy lifting has already been done by the CSS flattening code.
+        '''
+        rules = defaultdict(dict)
+        for item in self.oeb.manifest:
+            if not hasattr(item.data, 'cssRules'): continue
+            for i, rule in enumerate(item.data.cssRules):
+                if rule.type != rule.STYLE_RULE:
+                    continue
+                props = {k:v for k,v in
+                        self.get_font_properties(rule).iteritems() if v}
+                if not props:
+                    continue
+                for sel in rule.selectorList:
+                    sel = sel.selectorText
+                    if sel and sel.startswith('.'):
+                        # We dont care about pseudo-selectors as the worst that
+                        # can happen is some extra characters will remain in
+                        # the font
+                        sel = sel.partition(':')[0]
+                        rules[sel[1:]].update(props)
+
+        self.style_rules = dict(rules)
+
+    def find_font_usage(self):
+        for item in self.oeb.manifest:
+            if not hasattr(item.data, 'xpath'): continue
+            for body in item.data.xpath('//*[local-name()="body"]'):
+                base = {'font-family':['serif'], 'font-weight': '400',
+                        'font-style':'normal', 'font-stretch':'normal'}
+                self.find_usage_in(body, base)
+
+    def elem_style(self, cls, inherited_style):
+        '''
+        Find the effective style for the given element.
+        '''
+        classes = cls.split()
+        style = inherited_style.copy()
+        for cls in classes:
+            style.update(self.style_rules.get(cls, {}))
+        wt = style.get('font-weight', None)
+        pwt = inherited_style.get('font-weight', '400')
+        if wt == 'bolder':
+            style['font-weight'] = {
+                    '100':'400',
+                    '200':'400',
+                    '300':'400',
+                    '400':'700',
+                    '500':'700',
+                    }.get(pwt, '900')
+        elif wt == 'lighter':
+            style['font-weight'] = {
+                    '600':'400', '700':'400',
+                    '800':'700', '900':'700'}.get(pwt, '100')
+
+        return style
+
+    def used_font(self, style):
+        '''
+        Given a style find the embedded font that matches it. Returns None if
+        no match is found ( can happen if not family matches).
+        '''
+        ff = style.get('font-family', [])
+        lnames = {x.lower() for x in ff}
+        matching_set = []
+
+        # Filter on font-family
+        for ef in self.embedded_fonts:
+            flnames = {x.lower() for x in ef.get('font-family', [])}
+            if not lnames.intersection(flnames):
+                continue
+            matching_set.append(ef)
+        if not matching_set:
+            return None
+
+        # Filter on font-stretch
+        widths = {x:i for i, x in enumerate(( 'ultra-condensed',
+                'extra-condensed', 'condensed', 'semi-condensed', 'normal',
+                'semi-expanded', 'expanded', 'extra-expanded', 'ultra-expanded'
+                ))}
+
+        width = widths[style.get('font-stretch', 'normal')]
+        for f in matching_set:
+            f['width'] = widths[style.get('font-stretch', 'normal')]
+
+        min_dist = min(abs(width-f['width']) for f in matching_set)
+        nearest = [f for f in matching_set if abs(width-f['width']) ==
+            min_dist]
+        if width <= 4:
+            lmatches = [f for f in nearest if f['width'] <= width]
+        else:
+            lmatches = [f for f in nearest if f['width'] >= width]
+        matching_set = (lmatches or nearest)
|
||||||
|
|
||||||
|
# Filter on font-style
|
||||||
|
fs = style.get('font-style', 'normal')
|
||||||
|
order = {
|
||||||
|
'oblique':['oblique', 'italic', 'normal'],
|
||||||
|
'normal':['normal', 'oblique', 'italic']
|
||||||
|
}.get(fs, ['italic', 'oblique', 'normal'])
|
||||||
|
for q in order:
|
||||||
|
matches = [f for f in matching_set if f.get('font-style', 'normal')
|
||||||
|
== q]
|
||||||
|
if matches:
|
||||||
|
matching_set = matches
|
||||||
|
break
|
||||||
|
|
||||||
|
# Filter on font weight
|
||||||
|
fw = int(style.get('font-weight', '400'))
|
||||||
|
if fw == 400:
|
||||||
|
q = [400, 500, 300, 200, 100, 600, 700, 800, 900]
|
||||||
|
elif fw == 500:
|
||||||
|
q = [500, 400, 300, 200, 100, 600, 700, 800, 900]
|
||||||
|
elif fw < 400:
|
||||||
|
q = [fw] + list(xrange(fw-100, -100, -100)) + list(xrange(fw+100,
|
||||||
|
100, 1000))
|
||||||
|
else:
|
||||||
|
q = [fw] + list(xrange(fw+100, 100, 1000)) + list(xrange(fw-100,
|
||||||
|
-100, -100))
|
||||||
|
for wt in q:
|
||||||
|
matches = [f for f in matching_set if f['weight'] == wt]
|
||||||
|
if matches:
|
||||||
|
return matches[0]
|
||||||
|
|
||||||
|
def find_chars(self, elem):
|
||||||
|
ans = set()
|
||||||
|
if elem.text:
|
||||||
|
ans |= set(elem.text)
|
||||||
|
for child in elem:
|
||||||
|
if child.tail:
|
||||||
|
ans |= set(child.tail)
|
||||||
|
return ans
|
||||||
|
|
||||||
|
def find_usage_in(self, elem, inherited_style):
|
||||||
|
style = self.elem_style(elem.get('class', ''), inherited_style)
|
||||||
|
for child in elem:
|
||||||
|
self.find_usage_in(child, style)
|
||||||
|
font = self.used_font(style)
|
||||||
|
if font:
|
||||||
|
chars = self.find_chars(elem)
|
||||||
|
if chars:
|
||||||
|
font['chars'] |= chars
|
||||||
|
|
||||||
|
|
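The weight filter above follows the CSS 2.1 fallback rules for font-weight matching. A minimal standalone sketch of the preference-order logic (a hypothetical helper for illustration, not part of the calibre source; note that the descending range deliberately runs past 100 to 0, mirroring the code above, which is harmless since no font declares weight 0):

```python
def weight_preference_order(fw):
    # Order in which candidate font weights are tried when matching a
    # requested weight fw, per the CSS 2.1 rules: 400 and 500 prefer each
    # other first; requests below 400 fall back downwards then upwards;
    # requests above 500 fall back upwards then downwards.
    if fw == 400:
        return [400, 500, 300, 200, 100, 600, 700, 800, 900]
    if fw == 500:
        return [500, 400, 300, 200, 100, 600, 700, 800, 900]
    if fw < 400:
        return [fw] + list(range(fw - 100, -100, -100)) + list(range(fw + 100, 1000, 100))
    return [fw] + list(range(fw + 100, 1000, 100)) + list(range(fw - 100, -100, -100))
```

So a request for weight 300 tries 300, then 200, 100, and only then the heavier weights 400 through 900.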
@@ -32,7 +32,7 @@ class LookAndFeelWidget(Widget, Ui_Form):
         Widget.__init__(self, parent,
                 ['change_justification', 'extra_css', 'base_font_size',
                 'font_size_mapping', 'line_height', 'minimum_line_height',
-                'embed_font_family',
+                'embed_font_family', 'subset_embedded_fonts',
                 'smarten_punctuation', 'unsmarten_punctuation',
                 'disable_font_rescaling', 'insert_blank_line',
                 'remove_paragraph_spacing',
@@ -6,7 +6,7 @@
    <rect>
     <x>0</x>
     <y>0</y>
-    <width>655</width>
+    <width>699</width>
     <height>619</height>
    </rect>
   </property>
@@ -406,7 +406,14 @@
       </widget>
      </item>
      <item row="6" column="1" colspan="2">
-      <widget class="FontFamilyChooser" name="opt_embed_font_family"/>
+      <widget class="FontFamilyChooser" name="opt_embed_font_family" native="true"/>
+     </item>
+     <item row="6" column="3" colspan="2">
+      <widget class="QCheckBox" name="opt_subset_embedded_fonts">
+       <property name="text">
+        <string>&amp;Subset all embedded fonts (Experimental)</string>
+       </property>
+      </widget>
      </item>
    </layout>
   </widget>
@@ -7,8 +7,8 @@ __docformat__ = 'restructuredtext en'

 import functools

-from PyQt4.Qt import Qt, QStackedWidget, QMenu, \
-        QSize, QSizePolicy, QStatusBar, QLabel, QFont
+from PyQt4.Qt import (Qt, QStackedWidget, QMenu, QTimer,
+        QSize, QSizePolicy, QStatusBar, QLabel, QFont)

 from calibre.utils.config import prefs
 from calibre.constants import (isosx, __appname__, preferred_encoding,
@@ -274,7 +274,7 @@ class LayoutMixin(object): # {{{

         m = self.library_view.model()
         if m.rowCount(None) > 0:
-            self.library_view.set_current_row(0)
+            QTimer.singleShot(0, self.library_view.set_current_row)
             m.current_changed(self.library_view.currentIndex(),
                     self.library_view.currentIndex())
         self.library_view.setFocus(Qt.OtherFocusReason)
@@ -777,7 +777,7 @@ class BooksView(QTableView): # {{{
                 self.scrollTo(self.model().index(row, i), self.PositionAtCenter)
                 break

-    def set_current_row(self, row, select=True):
+    def set_current_row(self, row=0, select=True):
         if row > -1 and row < self.model().rowCount(QModelIndex()):
             h = self.horizontalHeader()
             logical_indices = list(range(h.count()))
@@ -188,6 +188,10 @@ class MetadataSingleDialogBase(ResizableDialog):
         self.tags_editor_button.setToolTip(_('Open Tag Editor'))
         self.tags_editor_button.setIcon(QIcon(I('chapters.png')))
         self.tags_editor_button.clicked.connect(self.tags_editor)
+        self.clear_tags_button = QToolButton(self)
+        self.clear_tags_button.setToolTip(_('Clear all tags'))
+        self.clear_tags_button.setIcon(QIcon(I('trash.png')))
+        self.clear_tags_button.clicked.connect(self.tags.clear)
         self.basic_metadata_widgets.append(self.tags)

         self.identifiers = IdentifiersEdit(self)
@@ -656,9 +660,10 @@ class MetadataSingleDialog(MetadataSingleDialogBase): # {{{
         l.addItem(self.tabs[0].spc_one, 1, 0, 1, 3)
         sto(self.cover.buttons[-1], self.rating)
         create_row2(1, self.rating)
-        sto(self.rating, self.tags)
-        create_row2(2, self.tags, self.tags_editor_button)
-        sto(self.tags_editor_button, self.paste_isbn_button)
+        sto(self.rating, self.tags_editor_button)
+        sto(self.tags_editor_button, self.tags)
+        create_row2(2, self.tags, self.clear_tags_button, front_button=self.tags_editor_button)
+        sto(self.clear_tags_button, self.paste_isbn_button)
         sto(self.paste_isbn_button, self.identifiers)
         create_row2(3, self.identifiers, self.clear_identifiers_button,
                 front_button=self.paste_isbn_button)
@@ -761,6 +766,7 @@ class MetadataSingleDialogAlt1(MetadataSingleDialogBase): # {{{
         tl.addWidget(self.swap_title_author_button, 0, 0, 2, 1)
         tl.addWidget(self.manage_authors_button, 2, 0, 1, 1)
         tl.addWidget(self.paste_isbn_button, 12, 0, 1, 1)
+        tl.addWidget(self.tags_editor_button, 6, 0, 1, 1)

         create_row(0, self.title, self.title_sort,
                    button=self.deduce_title_sort_button, span=2,
@@ -773,7 +779,7 @@ class MetadataSingleDialogAlt1(MetadataSingleDialogBase): # {{{
         create_row(4, self.series, self.series_index,
                    button=self.clear_series_button, icon='trash.png')
         create_row(5, self.series_index, self.tags)
-        create_row(6, self.tags, self.rating, button=self.tags_editor_button)
+        create_row(6, self.tags, self.rating, button=self.clear_tags_button)
         create_row(7, self.rating, self.pubdate)
         create_row(8, self.pubdate, self.publisher,
                    button=self.pubdate.clear_button, icon='trash.png')
@@ -785,7 +791,8 @@ class MetadataSingleDialogAlt1(MetadataSingleDialogBase): # {{{
                    button=self.clear_identifiers_button, icon='trash.png')
         sto(self.clear_identifiers_button, self.swap_title_author_button)
         sto(self.swap_title_author_button, self.manage_authors_button)
-        sto(self.manage_authors_button, self.paste_isbn_button)
+        sto(self.manage_authors_button, self.tags_editor_button)
+        sto(self.tags_editor_button, self.paste_isbn_button)
         tl.addItem(QSpacerItem(1, 1, QSizePolicy.Fixed, QSizePolicy.Expanding),
                    13, 1, 1 ,1)

@@ -896,6 +903,7 @@ class MetadataSingleDialogAlt2(MetadataSingleDialogBase): # {{{
         tl.addWidget(self.swap_title_author_button, 0, 0, 2, 1)
         tl.addWidget(self.manage_authors_button, 2, 0, 2, 1)
         tl.addWidget(self.paste_isbn_button, 12, 0, 1, 1)
+        tl.addWidget(self.tags_editor_button, 6, 0, 1, 1)

         create_row(0, self.title, self.title_sort,
                    button=self.deduce_title_sort_button, span=2,
@@ -908,7 +916,7 @@ class MetadataSingleDialogAlt2(MetadataSingleDialogBase): # {{{
         create_row(4, self.series, self.series_index,
                    button=self.clear_series_button, icon='trash.png')
         create_row(5, self.series_index, self.tags)
-        create_row(6, self.tags, self.rating, button=self.tags_editor_button)
+        create_row(6, self.tags, self.rating, button=self.clear_tags_button)
         create_row(7, self.rating, self.pubdate)
         create_row(8, self.pubdate, self.publisher,
                    button=self.pubdate.clear_button, icon='trash.png')
||||||
@ -920,7 +928,8 @@ class MetadataSingleDialogAlt2(MetadataSingleDialogBase): # {{{
|
|||||||
button=self.clear_identifiers_button, icon='trash.png')
|
button=self.clear_identifiers_button, icon='trash.png')
|
||||||
sto(self.clear_identifiers_button, self.swap_title_author_button)
|
sto(self.clear_identifiers_button, self.swap_title_author_button)
|
||||||
sto(self.swap_title_author_button, self.manage_authors_button)
|
sto(self.swap_title_author_button, self.manage_authors_button)
|
||||||
sto(self.manage_authors_button, self.paste_isbn_button)
|
sto(self.manage_authors_button, self.tags_editor_button)
|
||||||
|
sto(self.tags_editor_button, self.paste_isbn_button)
|
||||||
tl.addItem(QSpacerItem(1, 1, QSizePolicy.Fixed, QSizePolicy.Expanding),
|
tl.addItem(QSpacerItem(1, 1, QSizePolicy.Fixed, QSizePolicy.Expanding),
|
||||||
13, 1, 1 ,1)
|
13, 1, 1 ,1)
|
||||||
|
|
||||||
|
@@ -18,6 +18,7 @@ from calibre.customize.ui import (initialized_plugins, is_disabled, enable_plugi
         remove_plugin, NameConflict)
 from calibre.gui2 import (NONE, error_dialog, info_dialog, choose_files,
         question_dialog, gprefs)
+from calibre.gui2.dialogs.confirm_delete import confirm
 from calibre.utils.search_query_parser import SearchQueryParser
 from calibre.utils.icu import lower
 from calibre.constants import iswindows
@@ -363,6 +364,12 @@ class ConfigWidget(ConfigWidgetBase, Ui_Form):
             if plugin.do_user_config(self.gui):
                 self._plugin_model.refresh_plugin(plugin)
         elif op == 'remove':
+            if not confirm('<p>' +
+                    _('Are you sure you want to remove the plugin: %s?')%
+                    '<b>{0}</b>'.format(plugin.name),
+                    'confirm_plugin_removal_msg', parent=self):
+                return
+
             msg = _('Plugin <b>{0}</b> successfully removed').format(plugin.name)
             if remove_plugin(plugin):
                 self._plugin_model.populate()
@@ -6,102 +6,19 @@ __license__ = 'GPL 3'
 __copyright__ = '2011, John Schember <john@nachtimwald.com>'
 __docformat__ = 'restructuredtext en'

-from contextlib import closing
+from calibre.gui2.store.stores.amazon_uk_plugin import AmazonUKKindleStore

-from lxml import html
+class AmazonDEKindleStore(AmazonUKKindleStore):

-from PyQt4.Qt import QUrl
-
-from calibre import browser
-from calibre.gui2 import open_url
-from calibre.gui2.store import StorePlugin
-from calibre.gui2.store.search_result import SearchResult
-
-class AmazonDEKindleStore(StorePlugin):
     '''
     For comments on the implementation, please see amazon_plugin.py
     '''

-    def open(self, parent=None, detail_item=None, external=False):
-        aff_id = {'tag': 'charhale0a-21'}
-        store_link = ('http://www.amazon.de/gp/redirect.html?ie=UTF8&site-redirect=de'
-                '&tag=%(tag)s&linkCode=ur2&camp=1638&creative=19454'
-                '&location=http://www.amazon.de/ebooks-kindle/b?node=530886031') % aff_id
-        if detail_item:
-            aff_id['asin'] = detail_item
-            store_link = ('http://www.amazon.de/gp/redirect.html?ie=UTF8'
+    aff_id = {'tag': 'charhale0a-21'}
+    store_link = ('http://www.amazon.de/gp/redirect.html?ie=UTF8&site-redirect=de'
+                '&tag=%(tag)s&linkCode=ur2&camp=1638&creative=19454'
+                '&location=http://www.amazon.de/ebooks-kindle/b?node=530886031')
+    store_link_details = ('http://www.amazon.de/gp/redirect.html?ie=UTF8'
                 '&location=http://www.amazon.de/dp/%(asin)s&site-redirect=de'
-                '&tag=%(tag)s&linkCode=ur2&camp=1638&creative=6742') % aff_id
-        open_url(QUrl(store_link))
+                '&tag=%(tag)s&linkCode=ur2&camp=1638&creative=6742')
+    search_url = 'http://www.amazon.de/s/?url=search-alias%3Ddigital-text&field-keywords='

-    def search(self, query, max_results=10, timeout=60):
-        search_url = 'http://www.amazon.de/s/?url=search-alias%3Ddigital-text&field-keywords='
-        url = search_url + query.encode('ascii', 'backslashreplace').replace('%', '%25').replace('\\x', '%').replace(' ', '+')
-        br = browser()
-
-        counter = max_results
-        with closing(br.open(url, timeout=timeout)) as f:
-            # doc = html.fromstring(f.read().decode('latin-1', 'replace'))
-            # Apparently amazon Europe is responding in UTF-8 now
-            doc = html.fromstring(f.read())
-
-            data_xpath = '//div[contains(@class, "result") and contains(@class, "product")]'
-            format_xpath = './/span[@class="format"]/text()'
-            cover_xpath = './/img[@class="productImage"]/@src'
-
-            for data in doc.xpath(data_xpath):
-                if counter <= 0:
-                    break
-
-                # Even though we are searching digital-text only Amazon will still
-                # put in results for non Kindle books (author pages). So we need
-                # to explicitly check if the item is a Kindle book and ignore it
-                # if it isn't.
-                format = ''.join(data.xpath(format_xpath))
-                if 'kindle' not in format.lower():
-                    continue
-
-                # We must have an asin otherwise we can't easily reference the
-                # book later.
-                asin = ''.join(data.xpath("@name"))
-
-                cover_url = ''.join(data.xpath(cover_xpath))
-
-                title = ''.join(data.xpath('.//a[@class="title"]/text()'))
-                price = ''.join(data.xpath('.//div[@class="newPrice"]/span[contains(@class, "price")]/text()'))
-
-                author = ''.join(data.xpath('.//h3[@class="title"]/span[@class="ptBrand"]/text()'))
-                if author.startswith('von '):
-                    author = author[4:]
-
-                counter -= 1
-
-                s = SearchResult()
-                s.cover_url = cover_url.strip()
-                s.title = title.strip()
-                s.author = author.strip()
-                s.price = price.strip()
-                s.detail_item = asin.strip()
-                s.formats = 'Kindle'
-
-                yield s
-
-    def get_details(self, search_result, timeout):
-        drm_search_text = u'Gleichzeitige Verwendung von Geräten'
-        drm_free_text = u'Keine Einschränkung'
-        url = 'http://amazon.de/dp/'
-
-        br = browser()
-        with closing(br.open(url + search_result.detail_item, timeout=timeout)) as nf:
-            idata = html.fromstring(nf.read())
-            if idata.xpath('boolean(//div[@class="content"]//li/b[contains(text(), "' +
-                    drm_search_text + '")])'):
-                if idata.xpath('boolean(//div[@class="content"]//li[contains(., "' +
-                        drm_free_text + '") and contains(b, "' +
-                        drm_search_text + '")])'):
-                    search_result.drm = SearchResult.DRM_UNLOCKED
-                else:
-                    search_result.drm = SearchResult.DRM_UNKNOWN
-            else:
-                search_result.drm = SearchResult.DRM_LOCKED
-        return True
@@ -6,78 +6,17 @@ __license__ = 'GPL 3'
 __copyright__ = '2011, John Schember <john@nachtimwald.com>'
 __docformat__ = 'restructuredtext en'

-from contextlib import closing
+from calibre.gui2.store.stores.amazon_uk_plugin import AmazonUKKindleStore

-from lxml import html
+class AmazonESKindleStore(AmazonUKKindleStore):

-from PyQt4.Qt import QUrl
-
-from calibre import browser
-from calibre.gui2 import open_url
-from calibre.gui2.store import StorePlugin
-from calibre.gui2.store.search_result import SearchResult
-
-class AmazonESKindleStore(StorePlugin):
     '''
     For comments on the implementation, please see amazon_plugin.py
     '''

-    def open(self, parent=None, detail_item=None, external=False):
-        aff_id = {'tag': 'charhale09-21'}
-        store_link = 'http://www.amazon.es/ebooks-kindle/b?_encoding=UTF8&node=827231031&tag=%(tag)s&ie=UTF8&linkCode=ur2&camp=3626&creative=24790' % aff_id
-        if detail_item:
-            aff_id['asin'] = detail_item
-            store_link = 'http://www.amazon.es/gp/redirect.html?ie=UTF8&location=http://www.amazon.es/dp/%(asin)s&tag=%(tag)s&linkCode=ur2&camp=3626&creative=24790' % aff_id
-        open_url(QUrl(store_link))
+    aff_id = {'tag': 'charhale09-21'}
+    store_link = ('http://www.amazon.es/ebooks-kindle/b?_encoding=UTF8&'
+                  'node=827231031&tag=%(tag)s&ie=UTF8&linkCode=ur2&camp=3626&creative=24790')
+    store_link_details = ('http://www.amazon.es/gp/redirect.html?ie=UTF8&'
+                  'location=http://www.amazon.es/dp/%(asin)s&tag=%(tag)s'
+                  '&linkCode=ur2&camp=3626&creative=24790')
+    search_url = 'http://www.amazon.es/s/?url=search-alias%3Ddigital-text&field-keywords='

-    def search(self, query, max_results=10, timeout=60):
-        search_url = 'http://www.amazon.es/s/?url=search-alias%3Ddigital-text&field-keywords='
-        url = search_url + query.encode('ascii', 'backslashreplace').replace('%', '%25').replace('\\x', '%').replace(' ', '+')
-        br = browser()
-
-        counter = max_results
-        with closing(br.open(url, timeout=timeout)) as f:
-            # doc = html.fromstring(f.read().decode('latin-1', 'replace'))
-            # Apparently amazon Europe is responding in UTF-8 now
-            doc = html.fromstring(f.read())
-
-            data_xpath = '//div[contains(@class, "result") and contains(@class, "product")]'
-            format_xpath = './/span[@class="format"]/text()'
-            cover_xpath = './/img[@class="productImage"]/@src'
-
-            for data in doc.xpath(data_xpath):
-                if counter <= 0:
-                    break
-
-                # Even though we are searching digital-text only Amazon will still
-                # put in results for non Kindle books (author pages). So we need
-                # to explicitly check if the item is a Kindle book and ignore it
-                # if it isn't.
-                format = ''.join(data.xpath(format_xpath))
-                if 'kindle' not in format.lower():
-                    continue
-
-                # We must have an asin otherwise we can't easily reference the
-                # book later.
-                asin = ''.join(data.xpath("@name"))
-
-                cover_url = ''.join(data.xpath(cover_xpath))
-
-                title = ''.join(data.xpath('.//a[@class="title"]/text()'))
-                price = ''.join(data.xpath('.//div[@class="newPrice"]/span[contains(@class, "price")]/text()'))
-                author = unicode(''.join(data.xpath('.//h3[@class="title"]/span[@class="ptBrand"]/text()')))
-                if author.startswith('de '):
-                    author = author[3:]
-
-                counter -= 1
-
-                s = SearchResult()
-                s.cover_url = cover_url.strip()
-                s.title = title.strip()
-                s.author = author.strip()
-                s.price = price.strip()
-                s.detail_item = asin.strip()
-                s.formats = 'Kindle'
-                s.drm = SearchResult.DRM_UNKNOWN
-
-                yield s
@@ -6,79 +6,16 @@ __license__ = 'GPL 3'
 __copyright__ = '2011, John Schember <john@nachtimwald.com>'
 __docformat__ = 'restructuredtext en'

-from contextlib import closing
-
-from lxml import html
+from calibre.gui2.store.stores.amazon_uk_plugin import AmazonUKKindleStore

-from PyQt4.Qt import QUrl
+class AmazonFRKindleStore(AmazonUKKindleStore):

-from calibre import browser
-from calibre.gui2 import open_url
-from calibre.gui2.store import StorePlugin
-from calibre.gui2.store.search_result import SearchResult
-
-class AmazonFRKindleStore(StorePlugin):
     '''
     For comments on the implementation, please see amazon_plugin.py
     '''

-    def open(self, parent=None, detail_item=None, external=False):
-        aff_id = {'tag': 'charhale-21'}
-        store_link = 'http://www.amazon.fr/livres-kindle/b?ie=UTF8&node=695398031&ref_=sa_menu_kbo1&_encoding=UTF8&tag=%(tag)s&linkCode=ur2&camp=1642&creative=19458' % aff_id
+    aff_id = {'tag': 'charhale-21'}
+    store_link = 'http://www.amazon.fr/livres-kindle/b?ie=UTF8&node=695398031&ref_=sa_menu_kbo1&_encoding=UTF8&tag=%(tag)s&linkCode=ur2&camp=1642&creative=19458' % aff_id
+    store_link_details = 'http://www.amazon.fr/gp/redirect.html?ie=UTF8&location=http://www.amazon.fr/dp/%(asin)s&tag=%(tag)s&linkCode=ur2&camp=1634&creative=6738'
+    search_url = 'http://www.amazon.fr/s/?url=search-alias%3Ddigital-text&field-keywords='

-        if detail_item:
-            aff_id['asin'] = detail_item
-            store_link = 'http://www.amazon.fr/gp/redirect.html?ie=UTF8&location=http://www.amazon.fr/dp/%(asin)s&tag=%(tag)s&linkCode=ur2&camp=1634&creative=6738' % aff_id
-        open_url(QUrl(store_link))
-
-    def search(self, query, max_results=10, timeout=60):
-        search_url = 'http://www.amazon.fr/s/?url=search-alias%3Ddigital-text&field-keywords='
-        url = search_url + query.encode('ascii', 'backslashreplace').replace('%', '%25').replace('\\x', '%').replace(' ', '+')
-        br = browser()
-
-        counter = max_results
-        with closing(br.open(url, timeout=timeout)) as f:
-            # doc = html.fromstring(f.read().decode('latin-1', 'replace'))
-            # Apparently amazon Europe is responding in UTF-8 now
-            doc = html.fromstring(f.read())
-
-            data_xpath = '//div[contains(@class, "result") and contains(@class, "product")]'
-            format_xpath = './/span[@class="format"]/text()'
-            cover_xpath = './/img[@class="productImage"]/@src'
-
-            for data in doc.xpath(data_xpath):
-                if counter <= 0:
-                    break
-
-                # Even though we are searching digital-text only Amazon will still
-                # put in results for non Kindle books (author pages). So we need
-                # to explicitly check if the item is a Kindle book and ignore it
-                # if it isn't.
-                format = ''.join(data.xpath(format_xpath))
-                if 'kindle' not in format.lower():
-                    continue
-
-                # We must have an asin otherwise we can't easily reference the
-                # book later.
-                asin = ''.join(data.xpath("@name"))
-
-                cover_url = ''.join(data.xpath(cover_xpath))
-
-                title = ''.join(data.xpath('.//a[@class="title"]/text()'))
-                price = ''.join(data.xpath('.//div[@class="newPrice"]/span[contains(@class, "price")]/text()'))
-                author = unicode(''.join(data.xpath('.//h3[@class="title"]/span[@class="ptBrand"]/text()')))
-                if author.startswith('de '):
-                    author = author[3:]
-
-                counter -= 1
-
-                s = SearchResult()
-                s.cover_url = cover_url.strip()
-                s.title = title.strip()
-                s.author = author.strip()
-                s.price = price.strip()
-                s.detail_item = asin.strip()
-                s.formats = 'Kindle'
-                s.drm = SearchResult.DRM_UNKNOWN
-
-                yield s
-
@@ -6,78 +6,17 @@ __license__ = 'GPL 3'
 __copyright__ = '2011, John Schember <john@nachtimwald.com>'
 __docformat__ = 'restructuredtext en'

-from contextlib import closing
+from calibre.gui2.store.stores.amazon_uk_plugin import AmazonUKKindleStore

-from lxml import html
+class AmazonITKindleStore(AmazonUKKindleStore):

-from PyQt4.Qt import QUrl
-
-from calibre import browser
-from calibre.gui2 import open_url
-from calibre.gui2.store import StorePlugin
-from calibre.gui2.store.search_result import SearchResult
-
-class AmazonITKindleStore(StorePlugin):
     '''
     For comments on the implementation, please see amazon_plugin.py
     '''

-    def open(self, parent=None, detail_item=None, external=False):
-        aff_id = {'tag': 'httpcharles07-21'}
-        store_link = 'http://www.amazon.it/ebooks-kindle/b?_encoding=UTF8&node=827182031&tag=%(tag)s&ie=UTF8&linkCode=ur2&camp=3370&creative=23322' % aff_id
-        if detail_item:
-            aff_id['asin'] = detail_item
-            store_link = 'http://www.amazon.it/gp/redirect.html?ie=UTF8&location=http://www.amazon.it/dp/%(asin)s&tag=%(tag)s&linkCode=ur2&camp=3370&creative=23322' % aff_id
-        open_url(QUrl(store_link))
+    aff_id = {'tag': 'httpcharles07-21'}
+    store_link = ('http://www.amazon.it/ebooks-kindle/b?_encoding=UTF8&'
+                  'node=827182031&tag=%(tag)s&ie=UTF8&linkCode=ur2&camp=3370&creative=23322')
+    store_link_details = ('http://www.amazon.it/gp/redirect.html?ie=UTF8&'
+                  'location=http://www.amazon.it/dp/%(asin)s&tag=%(tag)s&'
+                  'linkCode=ur2&camp=3370&creative=23322')
+    search_url = 'http://www.amazon.it/s/?url=search-alias%3Ddigital-text&field-keywords='

-    def search(self, query, max_results=10, timeout=60):
-        search_url = 'http://www.amazon.it/s/?url=search-alias%3Ddigital-text&field-keywords='
-        url = search_url + query.encode('ascii', 'backslashreplace').replace('%', '%25').replace('\\x', '%').replace(' ', '+')
-        br = browser()
-
-        counter = max_results
-        with closing(br.open(url, timeout=timeout)) as f:
-            # doc = html.fromstring(f.read().decode('latin-1', 'replace'))
-            # Apparently amazon Europe is responding in UTF-8 now
-            doc = html.fromstring(f.read())
-
-            data_xpath = '//div[contains(@class, "result") and contains(@class, "product")]'
-            format_xpath = './/span[@class="format"]/text()'
-            cover_xpath = './/img[@class="productImage"]/@src'
-
-            for data in doc.xpath(data_xpath):
-                if counter <= 0:
-                    break
-
-                # Even though we are searching digital-text only Amazon will still
-                # put in results for non Kindle books (author pages). So we need
-                # to explicitly check if the item is a Kindle book and ignore it
-                # if it isn't.
-                format = ''.join(data.xpath(format_xpath))
-                if 'kindle' not in format.lower():
-                    continue
-
-                # We must have an asin otherwise we can't easily reference the
-                # book later.
-                asin = ''.join(data.xpath("@name"))
-
-                cover_url = ''.join(data.xpath(cover_xpath))
-
-                title = ''.join(data.xpath('.//a[@class="title"]/text()'))
-                price = ''.join(data.xpath('.//div[@class="newPrice"]/span[contains(@class, "price")]/text()'))
-                author = unicode(''.join(data.xpath('.//h3[@class="title"]/span[@class="ptBrand"]/text()')))
-                if author.startswith('di '):
-                    author = author[3:]
-
-                counter -= 1
-
-                s = SearchResult()
-                s.cover_url = cover_url.strip()
-                s.title = title.strip()
-                s.author = author.strip()
-                s.price = price.strip()
-                s.detail_item = asin.strip()
|
|
||||||
s.formats = 'Kindle'
|
|
||||||
s.drm = SearchResult.DRM_UNKNOWN
|
|
||||||
|
|
||||||
yield s
|
|
||||||
|
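Both the removed Italian implementation and the shared UK plugin build their search URLs with the same encoding chain. A standalone Python 3 sketch of that chain (the plugin itself runs under Python 2, where `encode` returns a `str`, so the intermediate `decode('ascii')` here is an added assumption):

```python
def encode_query(query):
    # Non-ASCII characters become backslash escapes: 'è' -> '\xe8'
    q = query.encode('ascii', 'backslashreplace').decode('ascii')
    q = q.replace('%', '%25')    # protect literal percent signs first
    q = q.replace('\\x', '%')    # turn '\xe8' into percent-encoding '%e8'
    return q.replace(' ', '+')   # form-style space encoding

print(encode_query('caffè latte'))  # -> caff%e8+latte
```

Escaping `%` before rewriting `\x` matters: otherwise a literal percent sign in the query would collide with the percent signs introduced by the second replace.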
@@ -6,8 +6,9 @@ __license__ = 'GPL 3'
 __copyright__ = '2011, John Schember <john@nachtimwald.com>'
 __docformat__ = 'restructuredtext en'
 
-from contextlib import closing
+import re
 
+from contextlib import closing
 from lxml import html
 
 from PyQt4.Qt import QUrl
@@ -18,57 +19,80 @@ from calibre.gui2.store import StorePlugin
 from calibre.gui2.store.search_result import SearchResult
 
 class AmazonUKKindleStore(StorePlugin):
+    aff_id = {'tag': 'calcharles-21'}
+    store_link = ('http://www.amazon.co.uk/gp/redirect.html?ie=UTF8&'
+            'location=http://www.amazon.co.uk/Kindle-eBooks/b?'
+            'ie=UTF8&node=341689031&ref_=sa_menu_kbo2&tag=%(tag)s&'
+            'linkCode=ur2&camp=1634&creative=19450')
+    store_link_details = ('http://www.amazon.co.uk/gp/redirect.html?ie=UTF8&'
+            'location=http://www.amazon.co.uk/dp/%(asin)s&tag=%(tag)s&'
+            'linkCode=ur2&camp=1634&creative=6738')
+    search_url = 'http://www.amazon.co.uk/s/?url=search-alias%3Ddigital-text&field-keywords='
 
     '''
     For comments on the implementation, please see amazon_plugin.py
     '''
 
     def open(self, parent=None, detail_item=None, external=False):
-        aff_id = {'tag': 'calcharles-21'}
-        store_link = 'http://www.amazon.co.uk/gp/redirect.html?ie=UTF8&location=http://www.amazon.co.uk/Kindle-eBooks/b?ie=UTF8&node=341689031&ref_=sa_menu_kbo2&tag=%(tag)s&linkCode=ur2&camp=1634&creative=19450' % aff_id
-
+        store_link = self.store_link % self.aff_id
         if detail_item:
-            aff_id['asin'] = detail_item
-            store_link = 'http://www.amazon.co.uk/gp/redirect.html?ie=UTF8&location=http://www.amazon.co.uk/dp/%(asin)s&tag=%(tag)s&linkCode=ur2&camp=1634&creative=6738' % aff_id
+            self.aff_id['asin'] = detail_item
+            store_link = self.store_link_details % self.aff_id
         open_url(QUrl(store_link))
 
     def search(self, query, max_results=10, timeout=60):
-        search_url = 'http://www.amazon.co.uk/s/?url=search-alias%3Ddigital-text&field-keywords='
-        url = search_url + query.encode('ascii', 'backslashreplace').replace('%', '%25').replace('\\x', '%').replace(' ', '+')
+        url = self.search_url + query.encode('ascii', 'backslashreplace').replace('%', '%25').replace('\\x', '%').replace(' ', '+')
         br = browser()
 
         counter = max_results
         with closing(br.open(url, timeout=timeout)) as f:
-            # Apparently amazon Europe is responding in UTF-8 now
-            doc = html.fromstring(f.read())
+            doc = html.fromstring(f.read())#.decode('latin-1', 'replace'))
 
-            data_xpath = '//div[contains(@class, "result") and contains(@class, "product")]'
-            format_xpath = './/span[@class="format"]/text()'
+            data_xpath = '//div[contains(@class, "prod")]'
+            format_xpath = './/ul[contains(@class, "rsltL")]//span[contains(@class, "lrg") and not(contains(@class, "bld"))]/text()'
+            asin_xpath = './/div[@class="image"]/a[1]'
             cover_xpath = './/img[@class="productImage"]/@src'
+            title_xpath = './/h3[@class="newaps"]/a//text()'
+            author_xpath = './/h3[@class="newaps"]//span[contains(@class, "reg")]/text()'
+            price_xpath = './/ul[contains(@class, "rsltL")]//span[contains(@class, "lrg") and contains(@class, "bld")]/text()'
 
             for data in doc.xpath(data_xpath):
                 if counter <= 0:
                     break
 
                 # Even though we are searching digital-text only Amazon will still
                 # put in results for non Kindle books (author pages). So we need
                 # to explicitly check if the item is a Kindle book and ignore it
                 # if it isn't.
-                format = ''.join(data.xpath(format_xpath))
-                if 'kindle' not in format.lower():
+                format_ = ''.join(data.xpath(format_xpath))
+                if 'kindle' not in format_.lower():
                     continue
 
                 # We must have an asin otherwise we can't easily reference the
                 # book later.
-                asin = ''.join(data.xpath("@name"))
+                asin_href = None
+                asin_a = data.xpath(asin_xpath)
+                if asin_a:
+                    asin_href = asin_a[0].get('href', '')
+                    m = re.search(r'/dp/(?P<asin>.+?)(/|$)', asin_href)
+                    if m:
+                        asin = m.group('asin')
+                    else:
+                        continue
+                else:
+                    continue
 
                 cover_url = ''.join(data.xpath(cover_xpath))
 
-                title = ''.join(data.xpath('.//a[@class="title"]/text()'))
-                price = ''.join(data.xpath('.//div[@class="newPrice"]/span[contains(@class, "price")]/text()'))
-                author = ''.join(data.xpath('.//h3[@class="title"]/span[@class="ptBrand"]/text()'))
-                if author.startswith('by '):
-                    author = author[3:]
+                title = ''.join(data.xpath(title_xpath))
+                author = ''.join(data.xpath(author_xpath))
+                try:
+                    author = author.split('by ', 1)[1].split(" (")[0]
+                except:
+                    pass
+
+                price = ''.join(data.xpath(price_xpath))
 
                 counter -= 1
 
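The new result parsing above no longer trusts a `@name` attribute for the ASIN; it pulls it out of the product link's `/dp/` path segment with a regex. That regex in isolation (the `extract_asin` helper name is ours, not the plugin's):

```python
import re

def extract_asin(href):
    # An ASIN is the path segment immediately after '/dp/'; the lazy
    # '.+?' stops at the next '/' or at the end of the URL.
    m = re.search(r'/dp/(?P<asin>.+?)(/|$)', href)
    return m.group('asin') if m else None

print(extract_asin('http://www.amazon.co.uk/dp/B004YW6M2G/ref=sr_1_1'))  # -> B004YW6M2G
```

Links that contain no `/dp/` segment (author pages, browse nodes) yield `None`, which is why the plugin `continue`s when the match fails.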
@@ -78,37 +102,10 @@ class AmazonUKKindleStore(StorePlugin):
                 s.author = author.strip()
                 s.price = price.strip()
                 s.detail_item = asin.strip()
+                s.drm = SearchResult.DRM_UNKNOWN
                 s.formats = 'Kindle'
 
                 yield s
 
     def get_details(self, search_result, timeout):
-        # We might already have been called.
-        if search_result.drm:
-            return
-
-        url = 'http://amazon.co.uk/dp/'
-        drm_search_text = u'Simultaneous Device Usage'
-        drm_free_text = u'Unlimited'
-
-        br = browser()
-        with closing(br.open(url + search_result.detail_item, timeout=timeout)) as nf:
-            idata = html.fromstring(nf.read())
-            if not search_result.author:
-                search_result.author = ''.join(idata.xpath('//div[@class="buying" and contains(., "Author")]/a/text()'))
-            is_kindle = idata.xpath('boolean(//div[@class="buying"]/h1/span/span[contains(text(), "Kindle Edition")])')
-            if is_kindle:
-                search_result.formats = 'Kindle'
-            if idata.xpath('boolean(//div[@class="content"]//li/b[contains(text(), "' +
-                           drm_search_text + '")])'):
-                if idata.xpath('boolean(//div[@class="content"]//li[contains(., "' +
-                               drm_free_text + '") and contains(b, "' +
-                               drm_search_text + '")])'):
-                    search_result.drm = SearchResult.DRM_UNLOCKED
-                else:
-                    search_result.drm = SearchResult.DRM_UNKNOWN
-            else:
-                search_result.drm = SearchResult.DRM_LOCKED
-        return True
+        pass
@@ -25,7 +25,7 @@ class LibreDEStore(BasicStoreConfig, StorePlugin):
     def open(self, parent=None, detail_item=None, external=False):
         url = 'http://ad.zanox.com/ppc/?18817073C15644254T'
         url_details = ('http://ad.zanox.com/ppc/?18817073C15644254T&ULP=[['
-                       'http://www.libri.de/shop/action/productDetails?artiId={0}]]')
+                       'http://www.ebook.de/shop/action/productDetails?artiId={0}]]')
 
         if external or self.config.get('open_external', False):
             if detail_item:
@@ -41,33 +41,38 @@ class LibreDEStore(BasicStoreConfig, StorePlugin):
             d.exec_()
 
     def search(self, query, max_results=10, timeout=60):
-        url = ('http://www.libri.de/shop/action/quickSearch?facetNodeId=6'
-               '&mainsearchSubmit=Los!&searchString=' + urllib2.quote(query))
+        url = ('http://www.ebook.de/de/pathSearch?nav=52122&searchString='
+               + urllib2.quote(query))
         br = browser()
 
         counter = max_results
         with closing(br.open(url, timeout=timeout)) as f:
             doc = html.fromstring(f.read())
-            for data in doc.xpath('//div[contains(@class, "item")]'):
+            for data in doc.xpath('//div[contains(@class, "articlecontainer")]'):
                 if counter <= 0:
                     break
 
-                details = data.xpath('./div[@class="beschreibungContainer"]')
+                details = data.xpath('./div[@class="articleinfobox"]')
                 if not details:
                     continue
                 details = details[0]
-                id = ''.join(details.xpath('./div[@class="text"]/a/@name')).strip()
-                if not id:
+                id_ = ''.join(details.xpath('./a/@name')).strip()
+                if not id_:
                     continue
-                cover_url = ''.join(details.xpath('.//div[@class="coverImg"]/a/img/@src'))
-                title = ''.join(details.xpath('./div[@class="text"]/span[@class="titel"]/a/text()')).strip()
-                author = ''.join(details.xpath('./div[@class="text"]/span[@class="author"]/text()')).strip()
+                title = ''.join(details.xpath('.//a[@class="su1_c_l_titel"]/text()')).strip()
+                author = ''.join(details.xpath('.//div[@class="author"]/text()')).strip()
+                if author.startswith('von'):
+                    author = author[4:]
+
                 pdf = details.xpath(
-                    'boolean(.//span[@class="format" and contains(text(), "pdf")]/text())')
+                    'boolean(.//span[@class="bindername" and contains(text(), "pdf")]/text())')
                 epub = details.xpath(
-                    'boolean(.//span[@class="format" and contains(text(), "epub")]/text())')
+                    'boolean(.//span[@class="bindername" and contains(text(), "epub")]/text())')
                 mobi = details.xpath(
-                    'boolean(.//span[@class="format" and contains(text(), "mobipocket")]/text())')
+                    'boolean(.//span[@class="bindername" and contains(text(), "mobipocket")]/text())')
+
+                cover_url = ''.join(data.xpath('.//div[@class="coverImg"]/a/img/@src'))
                 price = ''.join(data.xpath('.//span[@class="preis"]/text()')).replace('*', '').strip()
 
                 counter -= 1
@@ -78,7 +83,7 @@ class LibreDEStore(BasicStoreConfig, StorePlugin):
                 s.author = author.strip()
                 s.price = price
                 s.drm = SearchResult.DRM_UNKNOWN
-                s.detail_item = id
+                s.detail_item = id_
                 formats = []
                 if epub:
                     formats.append('ePub')
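Each store plugin in this diff strips a locale-specific byline prefix from the scraped author string: `'by '` on amazon.co.uk, `'di '` on amazon.it, `'von'` on ebook.de. A hypothetical consolidated helper (the single-function form is our sketch, not calibre's code):

```python
def strip_author_prefix(author):
    # Locale-specific byline prefixes seen across the store plugins.
    for prefix in ('by ', 'di ', 'von '):
        if author.startswith(prefix):
            return author[len(prefix):].strip()
    return author.strip()

print(strip_author_prefix('von Johann Wolfgang von Goethe'))  # -> Johann Wolfgang von Goethe
```

Only a leading prefix is removed, so a 'von' inside the name itself is left alone.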
@@ -196,6 +196,8 @@ class QueueBulk(QProgressDialog):
                 dtitle = unicode(mi.title)
             except:
                 dtitle = repr(mi.title)
+            if len(dtitle) > 50:
+                dtitle = dtitle[:50].rpartition(' ')[0]+'...'
             self.setLabelText(_('Queueing ')+dtitle)
             desc = _('Convert book %(num)d of %(tot)d (%(title)s)') % dict(
                     num=self.i, tot=len(self.book_ids), title=dtitle)
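The truncation added above keeps the progress label under 50 characters while cutting at a word boundary rather than mid-word. The same logic in isolation:

```python
def truncate_title(dtitle, limit=50):
    # Cut at the last space before the limit, then append an ellipsis,
    # mirroring the QueueBulk change above.
    if len(dtitle) > limit:
        dtitle = dtitle[:limit].rpartition(' ')[0] + '...'
    return dtitle

print(truncate_title('the quick brown fox ' * 5))
# -> the quick brown fox the quick brown fox the quick...
```

`rpartition(' ')[0]` returns everything before the last space, so the result never ends on a half-word (titles with no spaces in the first 50 characters would collapse to just '...', a corner case the original accepts too).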
@@ -23,13 +23,16 @@ FIELDS = set(['title', 'authors', 'author_sort', 'publisher', 'rating',
     'formats', 'isbn', 'uuid', 'pubdate', 'cover', 'last_modified',
     'identifiers'])
 
+do_notify = True
 def send_message(msg=''):
+    global do_notify
+    if not do_notify:
+        return
     prints('Notifying calibre of the change')
     from calibre.utils.ipc import RC
-    import time
     t = RC(print_error=False)
     t.start()
-    time.sleep(3)
+    t.join(3)
     if t.done:
         t.conn.send('refreshdb:'+msg)
         t.conn.close()
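Replacing `time.sleep(3)` with `t.join(3)` means the caller still waits at most three seconds, but returns as soon as the notifier thread finishes instead of always paying the full delay. A generic sketch of the pattern (`Worker` is an illustrative stand-in for calibre's `RC` thread):

```python
import threading
import time

class Worker(threading.Thread):
    def __init__(self):
        super().__init__()
        self.done = False

    def run(self):
        time.sleep(0.05)   # stand-in for connecting to the running GUI
        self.done = True

t = Worker()
t.start()
t.join(3)          # returns as soon as run() finishes, 3 s at most
print(t.done)      # -> True
```

After `join` returns, checking a flag like `done` (or `is_alive()`) tells the caller whether the timeout expired or the thread actually completed.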
@@ -42,16 +45,22 @@ def get_parser(usage):
     parser = OptionParser(usage)
     go = parser.add_option_group(_('GLOBAL OPTIONS'))
     go.add_option('--library-path', '--with-library', default=None, help=_('Path to the calibre library. Default is to use the path stored in the settings.'))
+    go.add_option('--dont-notify-gui', default=False, action='store_true',
+            help=_('Do not notify the running calibre GUI (if any) that the database has'
+                ' changed. Use with care, as it can lead to database corruption!'))
 
     return parser
 
 def get_db(dbpath, options):
+    global do_notify
     if options.library_path is not None:
         dbpath = options.library_path
     if dbpath is None:
         raise ValueError('No saved library path, either run the GUI or use the'
                 ' --with-library option')
     dbpath = os.path.abspath(dbpath)
+    if options.dont_notify_gui:
+        do_notify = False
     return LibraryDatabase2(dbpath)
 
 def do_list(db, fields, afields, sort_by, ascending, search_text, line_width, separator,
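The new flag is a plain boolean `optparse` option; `optparse` derives the destination `dont_notify_gui` from the option string. Parsed on its own (option string and help text taken from the diff, the usage string is ours):

```python
from optparse import OptionParser

parser = OptionParser('usage: %prog [options]')
parser.add_option('--dont-notify-gui', default=False, action='store_true',
        help='Do not notify the running calibre GUI (if any) that the '
             'database has changed.')

opts, args = parser.parse_args(['--dont-notify-gui'])
print(opts.dont_notify_gui)  # -> True
```

With `action='store_true'` the flag takes no argument: present means `True`, absent means the `default` of `False`.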
@@ -292,6 +292,7 @@ class CustomColumns(object):
         if num is not None:
             data = self.custom_column_num_map[num]
             table,lt = self.custom_table_names(data['num'])
+            self.dirty_books_referencing('#'+data['label'], id, commit=False)
             self.conn.execute('DELETE FROM %s WHERE value=?'%lt, (id,))
             self.conn.execute('DELETE FROM %s WHERE id=?'%table, (id,))
             self.conn.commit()
@@ -200,6 +200,11 @@ def get_components(template, mi, id, timefmt='%b %Y', length=250,
         template = re.sub(r'\{series_index[^}]*?\}', '', template)
     if mi.rating is not None:
         format_args['rating'] = mi.format_rating(divide_by=2.0)
+    if mi.identifiers:
+        format_args['identifiers'] = mi.format_field_extended('identifiers')[1]
+    else:
+        format_args['identifiers'] = ''
+
     if hasattr(mi.timestamp, 'timetuple'):
         format_args['timestamp'] = strftime(timefmt, mi.timestamp.timetuple())
     if hasattr(mi.pubdate, 'timetuple'):
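The change above makes an `{identifiers}` field available to save-to-disk templates via `mi.format_field_extended('identifiers')[1]`. The exact rendering is calibre's own; a plausible stand-in formatter, assuming a `scheme:value` comma-joined form (this helper and its output format are our assumption, not taken from the diff):

```python
def format_identifiers(identifiers):
    # Hypothetical stand-in for mi.format_field_extended('identifiers')[1]:
    # render the identifiers dict as sorted 'scheme:value' pairs.
    return ','.join('%s:%s' % (k, v) for k, v in sorted(identifiers.items()))

print(format_identifiers({'isbn': '9780316069861', 'amazon': 'B003JTHWKU'}))
# -> amazon:B003JTHWKU,isbn:9780316069861
```

The `else` branch in the diff matters for templates: a book with no identifiers substitutes an empty string rather than raising a KeyError during formatting.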
@@ -37,11 +37,6 @@ def test_freetype():
     test()
     print ('FreeType OK!')
 
-def test_sfntly():
-    from calibre.utils.fonts.subset import test
-    test()
-    print ('sfntly OK!')
-
 def test_winutil():
     from calibre.devices.scanner import win_pnp_drives
     matches = win_pnp_drives.scanner()
@@ -120,7 +115,6 @@ def test():
     test_plugins()
     test_lxml()
     test_freetype()
-    test_sfntly()
     test_sqlite()
     test_imaging()
     test_unrar()
File diff suppressed because it is too large
Some files were not shown because too many files have changed in this diff