0.9.27
commit 78c50c3d4c
@@ -1,4 +1,4 @@
# vim:fileencoding=UTF-8:ts=2:sw=2:sta:et:sts=2:ai
# vim:fileencoding=utf-8:ts=2:sw=2:sta:et:sts=2:ai
# Each release can have new features and bug fixes. Each of which
# must have a title and can optionally have linked tickets and a description.
# In addition they can have a type field which defaults to minor, but should be major
@@ -20,6 +20,66 @@
# new recipes:
#  - title:

- version: 0.9.27
  date: 2013-04-12

  new features:
    - title: "Metadata download: Add two new sources for covers: Google Image Search and bigbooksearch.com."
      description: "To enable them go to Preferences->Metadata download and enable the 'Google Image' and 'Big Book Search' sources. Google Images is useful for finding larger covers as well as alternate versions of the cover. Big Book Search searches for alternate covers from amazon.com. It can occasionally find nicer covers than the direct Amazon source. Note that both these sources download multiple covers for a single book. Some of these covers can be wrong (i.e. they may be of a different book or not covers at all, so you should inspect the results and manually pick the best match). When bulk downloading, these sources are only used if the other sources find no covers."
      type: major

    - title: "Content server: Allow specifying a restriction to use for the server when embedding it as a WSGI app."
      tickets: [1167951]

    - title: "Get Books: Add a plugin for the Koobe Polish book store"

    - title: "calibredb add_format: Add an option to not replace existing formats. Also pep8 compliance."

    - title: "Allow restoring of the ORIGINAL_XXX format by right-clicking it in the book details panel"

  bug fixes:
    - title: "AZW3 Input: Do not fail to identify JPEG images with 8BIM headers created with Adobe Photoshop."
      tickets: [1167985]

    - title: "Amazon metadata download: Ignore Spanish edition entries when searching for a book on amazon.com"

    - title: "TXT Input: When converting a txt file with a Byte Order Mark, remove the Byte Order Mark before further processing as it can cause the first line of the text to be mis-interpreted."

    - title: "Get Books: Fix searching for current book/title/author by right clicking the get books icon"

    - title: "Get Books: Update nexto, gutenberg, and virtualo store plugins for website changes"

    - title: "Amazon metadata download: When downloading from amazon.co.jp handle the 'Black curtain redirect' for adult titles."
      tickets: [1165628]

    - title: "When extracting zip files do not allow maliciously created zip files to overwrite other files on the system"

    - title: "RTF Input: Handle RTF files with invalid border style specifications"
      tickets: [1021270]

  improved recipes:
    - The Escapist
    - San Francisco Chronicle
    - The Onion
    - Fronda
    - Tom's Hardware
    - New Yorker
    - Financial Times UK
    - Business Week Magazine
    - Victoria Times
    - tvxs
    - The Independent

  new recipes:
    - title: Economia
      author: Manish Bhattarai

    - title: Universe Today
      author: seird

    - title: The Galaxy's Edge
      author: Krittika Goyal

- version: 0.9.26
  date: 2013-04-05
@@ -436,8 +436,8 @@ generate a Table of Contents in the converted ebook, based on the actual content

.. note:: Using these options can be a little challenging to get exactly right.
    If you prefer creating/editing the Table of Contents by hand, convert to
    the EPUB or AZW3 formats and select the checkbox at the bottom of the
    screen that says
    the EPUB or AZW3 formats and select the checkbox at the bottom of the Table
    of Contents section of the conversion dialog that says
    :guilabel:`Manually fine-tune the Table of Contents after conversion`.
    This will launch the ToC Editor tool after the conversion. It allows you to
    create entries in the Table of Contents by simply clicking the place in the
@@ -647,12 +647,17 @@ computers. Run |app| on a single computer and access it via the Content Server
or a Remote Desktop solution.

If you must share the actual library, use a file syncing tool like
DropBox or rsync or Microsoft SkyDrive instead of a networked drive. Even with
these tools there is danger of data corruption/loss, so only do this if you are
willing to live with that risk. In particular, be aware that **Google Drive**
is incompatible with |app|, if you put your |app| library in Google Drive, you
*will* suffer data loss. See
`this thread <http://www.mobileread.com/forums/showthread.php?t=205581>`_ for details.
DropBox or rsync or Microsoft SkyDrive instead of a networked drive. If you are
using a file-syncing tool it is **essential** that you make sure that both
|app| and the file syncing tool do not try to access the |app| library at the
same time. In other words, **do not** run the file syncing tool and |app| at
the same time.

Even with these tools there is danger of data corruption/loss, so only do this
if you are willing to live with that risk. In particular, be aware that
**Google Drive** is incompatible with |app|, if you put your |app| library in
Google Drive, **you will suffer data loss**. See `this thread
<http://www.mobileread.com/forums/showthread.php?t=205581>`_ for details.

Content From The Web
---------------------
@@ -797,6 +802,12 @@ Downloading from the Internet can sometimes result in a corrupted download. If t
* Try temporarily disabling your antivirus program (Microsoft Security Essentials, or Kaspersky or Norton or McAfee or whatever). This is most likely the culprit if the upgrade process is hanging in the middle.
* Try rebooting your computer and running a registry cleaner like `Wise registry cleaner <http://www.wisecleaner.com>`_.
* Try downloading the installer with an alternate browser. For example if you are using Internet Explorer, try using Firefox or Chrome instead.
* If you get an error about a missing DLL on windows, then most likely, the
  permissions on your temporary folder are incorrect. Go to the folder
  :file:`C:\\Users\\USERNAME\\AppData\\Local` in Windows explorer and then
  right click on the :file:`Temp` folder and select :guilabel:`Properties` and go to
  the :guilabel:`Security` tab. Make sure that your user account has full control
  for this folder.

If you still cannot get the installer to work and you are on windows, you can use the `calibre portable install <http://calibre-ebook.com/download_portable>`_, which does not need an installer (it is just a zip file).
@@ -91,7 +91,11 @@ First, we have to create a WSGI *adapter* for the calibre content server. Here i
        # Path to the calibre library to be served
        # The server process must have write permission for all files/dirs
        # in this directory or BAD things will happen
        path_to_library='/home/kovid/documents/demo library'
        path_to_library='/home/kovid/documents/demo library',

        # The virtual library (restriction) to be used when serving this
        # library.
        virtual_library=None
)

del create_wsgi_app
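For reference, a minimal sketch of the amended adapter once this hunk is applied; the import path and the `app` variable follow the surrounding documentation, and the library path is just the placeholder used above:

    from calibre.library.server.main import create_wsgi_app

    app = create_wsgi_app(
            # Path to the calibre library to be served. Note the trailing
            # comma this commit adds, required before the new argument.
            path_to_library='/home/kovid/documents/demo library',

            # The virtual library (restriction) to be used when serving
            # this library (the new option from ticket 1167951); None
            # serves the whole library.
            virtual_library=None
    )
    del create_wsgi_app

Passing `virtual_library=None` keeps the old behaviour, so existing WSGI deployments are unaffected.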
@@ -1,3 +1,4 @@
import re
from calibre.web.feeds.recipes import BasicNewsRecipe
from collections import OrderedDict

@@ -39,7 +40,7 @@ class BusinessWeekMagazine(BasicNewsRecipe):
            title=self.tag_to_string(div.a).strip()
            url=div.a['href']
            soup0 = self.index_to_soup(url)
            urlprint=soup0.find('li', attrs={'class':'print tracked'}).a['href']
            urlprint=soup0.find('a', attrs={'href':re.compile('.*printer.*')})['href']
            articles.append({'title':title, 'url':urlprint, 'description':'', 'date':''})

@@ -56,7 +57,7 @@ class BusinessWeekMagazine(BasicNewsRecipe):
            title=self.tag_to_string(div.a).strip()
            url=div.a['href']
            soup0 = self.index_to_soup(url)
            urlprint=soup0.find('li', attrs={'class':'print tracked'}).a['href']
            urlprint=soup0.find('a', attrs={'href':re.compile('.*printer.*')})['href']
            articles.append({'title':title, 'url':urlprint, 'description':desc, 'date':''})

        if articles:
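The fix above swaps a lookup keyed on a CSS class the site no longer uses for a regex match against the link target. A self-contained sketch of the technique, with an invented HTML fragment:

    import re
    from calibre.ebooks.BeautifulSoup import BeautifulSoup

    html = '<div><a href="/printer/articles/123.html">Print</a></div>'
    soup = BeautifulSoup(html)
    # Match any anchor whose href mentions "printer", regardless of class
    link = soup.find('a', attrs={'href': re.compile('.*printer.*')})
    print(link['href'])  # -> /printer/articles/123.html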
17 recipes/economia.recipe Normal file
@@ -0,0 +1,17 @@
from calibre.web.feeds.news import BasicNewsRecipe

class AdvancedUserRecipe1314326622(BasicNewsRecipe):
    title = u'Economia'
    __author__ = 'Manish Bhattarai'
    description = 'Economia - Intelligence & Insight for ICAEW Members'
    language = 'en_GB'
    oldest_article = 7
    max_articles_per_feed = 25
    masthead_url = 'http://economia.icaew.com/~/media/Images/Design%20Images/Economia_Red_website.ashx'
    cover_url = 'http://economia.icaew.com/~/media/Images/Design%20Images/Economia_Red_website.ashx'
    no_stylesheets = True
    remove_empty_feeds = True
    remove_tags_before = dict(id='content')
    remove_tags_after = dict(id='stars-wrapper')
    remove_tags = [dict(attrs={'class':['floatR', 'sharethis', 'rating clearfix']})]
    feeds = [(u'News', u'http://feedity.com/icaew-com/VlNTVFRa.rss'),(u'Business', u'http://feedity.com/icaew-com/VlNTVFtS.rss'),(u'People', u'http://feedity.com/icaew-com/VlNTVFtX.rss'),(u'Opinion', u'http://feedity.com/icaew-com/VlNTVFtW.rss'),(u'Finance', u'http://feedity.com/icaew-com/VlNTVFtV.rss')]
@@ -110,10 +110,12 @@ class FinancialTimes(BasicNewsRecipe):
        soup = self.index_to_soup(self.INDEX)
        #dates= self.tag_to_string(soup.find('div', attrs={'class':'btm-links'}).find('div'))
        #self.timefmt = ' [%s]'%dates
        section_title = 'Untitled'

        for column in soup.findAll('div', attrs = {'class':'feedBoxes clearfix'}):
            for section in column. findAll('div', attrs = {'class':'feedBox'}):
                section_title=self.tag_to_string(section.find('h4'))
                sectiontitle=self.tag_to_string(section.find('h4'))
                if '...' not in sectiontitle: section_title=sectiontitle
                for article in section.ul.findAll('li'):
                    articles = []
                    title=self.tag_to_string(article.a)
@@ -6,6 +6,7 @@ __copyright__ = u'2010-2013, Tomasz Dlugosz <tomek3d@gmail.com>'
fronda.pl
'''

import re
from calibre.web.feeds.news import BasicNewsRecipe
from datetime import timedelta, date

@@ -23,6 +24,7 @@ class Fronda(BasicNewsRecipe):
    extra_css = '''
        h1 {font-size:150%}
        .body {text-align:left;}
        div#featured-image {font-style:italic; font-size:70%}
    '''

    earliest_date = date.today() - timedelta(days=oldest_article)

@@ -55,7 +57,10 @@ class Fronda(BasicNewsRecipe):
        articles = {}

        for url, genName in genres:
            try:
                soup = self.index_to_soup('http://www.fronda.pl/c/'+ url)
            except:
                continue
            articles[genName] = []
            for item in soup.findAll('li'):
                article_h = item.find('h2')

@@ -77,16 +82,15 @@ class Fronda(BasicNewsRecipe):
    ]

    remove_tags = [
        dict(name='div', attrs={'class':['related-articles',
                                         'button right',
                                         'pagination']}),
        dict(name='div', attrs={'class':['related-articles','button right','pagination','related-articles content']}),
        dict(name='h3', attrs={'class':'block-header article comments'}),
        dict(name='ul', attrs={'class':'comment-list'}),
        dict(name='ul', attrs={'class':'category'}),
        dict(name='ul', attrs={'class':'tag-list'}),
        dict(name='ul', attrs={'class':['comment-list','category','tag-list']}),
        dict(name='p', attrs={'id':'comments-disclaimer'}),
        dict(name='div', attrs={'style':'text-align: left; margin-bottom: 15px;'}),
        dict(name='div', attrs={'style':'text-align: left; margin-top: 15px; margin-bottom: 30px;'}),
        dict(name='div', attrs={'class':'related-articles content'}),
        dict(name='div', attrs={'id':'comment-form'})
        dict(name='div', attrs={'id':'comment-form'}),
        dict(name='span', attrs={'class':'separator'})
    ]

    preprocess_regexps = [
        (re.compile(r'komentarzy: .*?</h6>', re.IGNORECASE | re.DOTALL | re.M ), lambda match: '</h6>')]
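The remove_tags cleanup above folds several single-class filters into one entry per tag name by passing a list of class names. A minimal illustration of why the two forms match the same tags in BeautifulSoup (the HTML is invented):

    from calibre.ebooks.BeautifulSoup import BeautifulSoup

    soup = BeautifulSoup('<ul class="category"></ul><ul class="tag-list"></ul>')
    separate = soup.findAll('ul', attrs={'class': 'category'}) + \
               soup.findAll('ul', attrs={'class': 'tag-list'})
    # attrs accepts a list of alternatives, so one filter covers both
    combined = soup.findAll('ul', attrs={'class': ['category', 'tag-list']})
    assert len(separate) == len(combined) == 2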
108 recipes/galaxys_edge.recipe Normal file
@@ -0,0 +1,108 @@
from __future__ import with_statement
__license__ = 'GPL 3'
__copyright__ = '2009, Kovid Goyal <kovid@kovidgoyal.net>'

from calibre.web.feeds.news import BasicNewsRecipe

class GalaxyEdge(BasicNewsRecipe):
    title = u'The Galaxy\'s Edge'
    language = 'en'

    oldest_article = 7
    __author__ = 'Krittika Goyal'
    no_stylesheets = True

    auto_cleanup = True

    #keep_only_tags = [dict(id='content')]
    #remove_tags = [dict(attrs={'class':['article-links', 'breadcr']}),
        #dict(id=['email-section', 'right-column', 'printfooter', 'topover',
            #'slidebox', 'th_footer'])]

    extra_css = '.photo-caption { font-size: smaller }'

    def parse_index(self):
        soup = self.index_to_soup('http://www.galaxysedge.com/')
        main = soup.find('table', attrs={'width':'911'})
        toc = main.find('td', attrs={'width':'225'})

        current_section = None
        current_articles = []
        feeds = []
        c = 0
        for x in toc.findAll(['p']):
            c = c+1
            if c == 5:
                if current_articles and current_section:
                    feeds.append((current_section, current_articles))
                edwo = x.find('a')
                current_section = self.tag_to_string(edwo)
                current_articles = []
                self.log('\tFound section:', current_section)
                title = self.tag_to_string(edwo)
                url = edwo.get('href', True)
                url = 'http://www.galaxysedge.com/'+url
                print(title)
                print(c)
                if not url or not title:
                    continue
                self.log('\t\tFound article:', title)
                self.log('\t\t\t', url)
                current_articles.append({'title': title, 'url':url,
                    'description':'', 'date':''})
            elif c>5:
                current_section = self.tag_to_string(x.find('b'))
                current_articles = []
                self.log('\tFound section:', current_section)
                for y in x.findAll('a'):
                    title = self.tag_to_string(y)
                    url = y.get('href', True)
                    url = 'http://www.galaxysedge.com/'+url
                    print(title)
                    if not url or not title:
                        continue
                    self.log('\t\tFound article:', title)
                    self.log('\t\t\t', url)
                    current_articles.append({'title': title, 'url':url,
                        'description':'', 'date':''})
        if current_articles and current_section:
            feeds.append((current_section, current_articles))

        return feeds

    #def preprocess_raw_html(self, raw, url):
        #return raw.replace('<body><p>', '<p>').replace('</p></body>', '</p>')

    #def postprocess_html(self, soup, first_fetch):
        #for t in soup.findAll(['table', 'tr', 'td','center']):
            #t.name = 'div'
        #return soup

    #def parse_index(self):
        #today = time.strftime('%Y-%m-%d')
        #soup = self.index_to_soup(
            #'http://www.thehindu.com/todays-paper/tp-index/?date=' + today)
        #div = soup.find(id='left-column')
        #feeds = []
        #current_section = None
        #current_articles = []
        #for x in div.findAll(['h3', 'div']):
            #if current_section and x.get('class', '') == 'tpaper':
                #a = x.find('a', href=True)
                #if a is not None:
                    #current_articles.append({'url':a['href']+'?css=print',
                        #'title':self.tag_to_string(a), 'date': '',
                        #'description':''})
            #if x.name == 'h3':
                #if current_section and current_articles:
                    #feeds.append((current_section, current_articles))
                #current_section = self.tag_to_string(x)
                #current_articles = []
        #return feeds
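For reference, `parse_index` must return a list of `(section title, article list)` pairs, each article being a dict; that is the shape the loops above build up. A hand-rolled example of the structure (all values invented):

    feeds = [
        ('Editorials', [
            {'title': 'A sample article',
             'url': 'http://www.galaxysedge.com/sample.html',
             'description': '', 'date': ''},
        ]),
    ]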
BIN recipes/icons/newsweek_polska.png Normal file (905 B; binary file not shown)
@@ -41,6 +41,7 @@ class TheIndependentNew(BasicNewsRecipe):
    publication_type = 'newspaper'
    masthead_url = 'http://www.independent.co.uk/independent.co.uk/editorial/logo/independent_Masthead.png'
    encoding = 'utf-8'
    compress_news_images = True
    remove_tags =[
        dict(attrs={'id' : ['RelatedArtTag','renderBiography']}),
        dict(attrs={'class' : ['autoplay','openBiogPopup']}),
@@ -1,64 +1,44 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
__license__ = 'GPL v3'
__copyright__ = '2008-2013, Darko Miletic <darko.miletic at gmail.com>'
'''
newyorker.com
'''

'''
www.canada.com
'''
import re
from calibre.web.feeds.news import BasicNewsRecipe

from calibre.ebooks.BeautifulSoup import BeautifulStoneSoup

class NewYorker(BasicNewsRecipe):
    title = 'The New Yorker'
    __author__ = 'Darko Miletic'
    description = 'The best of US journalism'
    oldest_article = 15

    title = u'New Yorker Magazine'
    newyorker_prefix = 'http://m.newyorker.com'
    description = u'Content from the New Yorker website'
    fp_tag = 'CAN_TC'

    masthead_url = 'http://www.newyorker.com/images/elements/print/newyorker_printlogo.gif'

    compress_news_images = True
    compress_news_images_auto_size = 8
    scale_news_images_to_device = False
    scale_news_images = (768, 1024)

    url_list = []
    language = 'en'
    max_articles_per_feed = 100
    __author__ = 'Nick Redding'
    no_stylesheets = True
    use_embedded_content = False
    publisher = 'Conde Nast Publications'
    category = 'news, politics, USA'
    encoding = 'cp1252'
    publication_type = 'magazine'
    masthead_url = 'http://www.newyorker.com/css/i/hed/logo.gif'
    extra_css = """
        body {font-family: "Times New Roman",Times,serif}
        .articleauthor{color: #9F9F9F;
            font-family: Arial, sans-serif;
            font-size: small;
            text-transform: uppercase}
        .rubric,.dd,h6#credit{color: #CD0021;
            font-family: Arial, sans-serif;
            font-size: small;
            text-transform: uppercase}
        .descender:first-letter{display: inline; font-size: xx-large; font-weight: bold}
        .dd,h6#credit{color: gray}
        .c{display: block}
        .caption,h2#articleintro{font-style: italic}
        .caption{font-size: small}
    """
    timefmt = ' [%b %d]'
    encoding = 'utf-8'
    extra_css = '''
        .byline { font-size:xx-small; font-weight: bold;}
        h3 { margin-bottom: 6px; }
        .caption { font-size: xx-small; font-style: italic; font-weight: normal; }
    '''
    keep_only_tags = [dict(name='div', attrs={'id':re.compile('pagebody')})]

    conversion_options = {
        'comment'     : description
        , 'tags'      : category
        , 'publisher' : publisher
        , 'language'  : language
    }

    keep_only_tags = [dict(name='div', attrs={'id':'pagebody'})]
    remove_tags = [
        dict(name=['meta','iframe','base','link','embed','object'])
        ,dict(attrs={'class':['utils','socialUtils','articleRailLinks','icons','social-utils-top','entry-keywords','entry-categories','utilsPrintEmail'] })
        ,dict(attrs={'id':['show-header','show-footer'] })
    ]
    remove_tags_after = dict(attrs={'class':'entry-content'})
    remove_attributes = ['lang']
    feeds = [(u'The New Yorker', u'http://www.newyorker.com/services/mrss/feeds/everything.xml')]

    def print_version(self, url):
        return url + '?printable=true&currentPage=all'

    def image_url_processor(self, baseurl, url):
        return url.strip()
    remove_tags = [{'class':'socialUtils'},{'class':'entry-keywords'}]

    def get_cover_url(self):
        cover_url = "http://www.newyorker.com/images/covers/1925/1925_02_21_p233.jpg"

@@ -68,13 +48,233 @@ class NewYorker(BasicNewsRecipe):
            cover_url = 'http://www.newyorker.com' + cover_item.div.img['src'].strip()
        return cover_url

    def preprocess_html(self, soup):
        for item in soup.findAll(style=True):
            del item['style']
        auth = soup.find(attrs={'id':'articleauthor'})
        if auth:
            alink = auth.find('a')
            if alink and alink.string is not None:
                txt = alink.string
                alink.replaceWith(txt)
    def fixChars(self,string):
        # Replace lsquo (\x91)
        fixed = re.sub("\x91","‘",string)
        # Replace rsquo (\x92)
        fixed = re.sub("\x92","’",fixed)
        # Replace ldquo (\x93)
        fixed = re.sub("\x93","“",fixed)
        # Replace rdquo (\x94)
        fixed = re.sub("\x94","”",fixed)
        # Replace ndash (\x96)
        fixed = re.sub("\x96","–",fixed)
        # Replace mdash (\x97)
        fixed = re.sub("\x97","—",fixed)
        fixed = re.sub("&#x2019;","’",fixed)
        return fixed

    def massageNCXText(self, description):
        # Kindle TOC descriptions won't render certain characters
        if description:
            massaged = unicode(BeautifulStoneSoup(description, convertEntities=BeautifulStoneSoup.HTML_ENTITIES))
            # Replace '&amp;' with '&'
            massaged = re.sub("&amp;","&", massaged)
            return self.fixChars(massaged)
        else:
            return description

    def populate_article_metadata(self, article, soup, first):
        if first:
            picdiv = soup.find('body').find('img')
            if picdiv is not None:
                self.add_toc_thumbnail(article,re.sub(r'links\\link\d+\\','',picdiv['src']))
        xtitle = article.text_summary.strip()
        if len(xtitle) == 0:
            desc = soup.find('meta',attrs={'property':'og:description'})
            if desc is not None:
                article.summary = article.text_summary = desc['content']
        shortparagraph = ""
##        try:
        if len(article.text_summary.strip()) == 0:
            articlebodies = soup.findAll('div',attrs={'class':'entry-content'})
            if articlebodies:
                for articlebody in articlebodies:
                    if articlebody:
                        paras = articlebody.findAll('p')
                        for p in paras:
                            refparagraph = self.massageNCXText(self.tag_to_string(p,use_alt=False)).strip()
                            #account for blank paragraphs and short paragraphs by appending them to longer ones
                            if len(refparagraph) > 0:
                                if len(refparagraph) > 70: #approximately one line of text
                                    newpara = shortparagraph + refparagraph
                                    article.summary = article.text_summary = newpara.strip()
                                    return
                                else:
                                    shortparagraph = refparagraph + " "
                                    if shortparagraph.strip().find(" ") == -1 and not shortparagraph.strip().endswith(":"):
                                        shortparagraph = shortparagraph + "- "
        else:
            article.summary = article.text_summary = self.massageNCXText(article.text_summary)
##        except:
##            self.log("Error creating article descriptions")
##            return

    def strip_anchors(self,soup):
        paras = soup.findAll(True)
        for para in paras:
            aTags = para.findAll('a')
            for a in aTags:
                if a.img is None:
                    a.replaceWith(a.renderContents().decode('cp1252','replace'))
        return soup

    def preprocess_html(self,soup):
        dateline = soup.find('div','published')
        byline = soup.find('div','byline')
        title = soup.find('h1','entry-title')
        if title is None:
            return self.strip_anchors(soup)
        if byline is None:
            title.append(dateline)
            return self.strip_anchors(soup)
        byline.append(dateline)
        return self.strip_anchors(soup)

    def load_global_nav(self,soup):
        seclist = []
        ul = soup.find('ul',attrs={'id':re.compile('global-nav-menu')})
        if ul is not None:
            for li in ul.findAll('li'):
                if li.a is not None:
                    securl = li.a['href']
                    if securl != '/' and securl != '/magazine' and securl.startswith('/'):
                        seclist.append((self.tag_to_string(li.a),self.newyorker_prefix+securl))
        return seclist

    def exclude_url(self,url):
        if url in self.url_list:
            return True
        if not url.endswith('html'):
            return True
        if 'goings-on-about-town-app' in url:
            return True
        if 'something-to-be-thankful-for' in url:
            return True
        if '/shouts/' in url:
            return True
        if 'out-loud' in url:
            return True
        if '/rss/' in url:
            return True
        if '/video-' in url:
            return True
        self.url_list.append(url)
        return False

    def load_index_page(self,soup):
        article_list = []
        for div in soup.findAll('div',attrs={'class':re.compile('^rotator')}):
            h2 = div.h2
            if h2 is not None:
                a = h2.a
                if a is not None:
                    url = a['href']
                    if not self.exclude_url(url):
                        if url.startswith('/'):
                            url = self.newyorker_prefix+url
                        byline = h2.span
                        if byline is not None:
                            author = self.tag_to_string(byline)
                            if author.startswith('by '):
                                author.replace('by ','')
                            byline.extract()
                        else:
                            author = ''
                        if h2.br is not None:
                            h2.br.replaceWith(' ')
                        title = self.tag_to_string(h2)
                        desc = div.find(attrs={'class':['rotator-ad-body','feature-blurb-text']})
                        if desc is not None:
                            description = self.tag_to_string(desc)
                        else:
                            description = ''
                        article_list.append(dict(title=title,url=url,date='',description=description,author=author,content=''))
            ul = div.find('ul','feature-blurb-links')
            if ul is not None:
                for li in ul.findAll('li'):
                    a = li.a
                    if a is not None:
                        url = a['href']
                        if not self.exclude_url(url):
                            if url.startswith('/'):
                                url = self.newyorker_prefix+url
                            if a.br is not None:
                                a.br.replaceWith(' ')
                            title = '>>'+self.tag_to_string(a)
                            article_list.append(dict(title=title,url=url,date='',description='',author='',content=''))
        for h3 in soup.findAll('h3','header'):
            a = h3.a
            if a is not None:
                url = a['href']
                if not self.exclude_url(url):
                    if url.startswith('/'):
                        url = self.newyorker_prefix+url
                    byline = h3.span
                    if byline is not None:
                        author = self.tag_to_string(byline)
                        if author.startswith('by '):
                            author = author.replace('by ','')
                        byline.extract()
                    else:
                        author = ''
                    if h3.br is not None:
                        h3.br.replaceWith(' ')
                    title = self.tag_to_string(h3).strip()
                    article_list.append(dict(title=title,url=url,date='',description='',author=author,content=''))
        return article_list

    def load_global_section(self,securl):
        article_list = []
        try:
            soup = self.index_to_soup(securl)
        except:
            return article_list
        if '/blogs/' not in securl:
            return self.load_index_page(soup)
        for div in soup.findAll('div',attrs={'id':re.compile('^entry')}):
            h3 = div.h3
            if h3 is not None:
                a = h3.a
                if a is not None:
                    url = a['href']
                    if not self.exclude_url(url):
                        if url.startswith('/'):
                            url = self.newyorker_prefix+url
                        if h3.br is not None:
                            h3.br.replaceWith(' ')
                        title = self.tag_to_string(h3)
                        article_list.append(dict(title=title,url=url,date='',description='',author='',content=''))
        return article_list

    def filter_ans(self, ans) :
        total_article_count = 0
        idx = 0
        idx_max = len(ans)-1
        while idx <= idx_max:
            if True: #self.verbose
                self.log("Section %s: %d articles" % (ans[idx][0], len(ans[idx][1])) )
            for article in ans[idx][1]:
                total_article_count += 1
                if True: #self.verbose
                    self.log("\t%-40.40s... \t%-60.60s..." % (article['title'].encode('cp1252','replace'),
                              article['url'].replace('http://m.newyorker.com','').encode('cp1252','replace')))
            idx = idx+1
        self.log( "Queued %d articles" % total_article_count )
        return ans

    def parse_index(self):
        ans = []
        try:
            soup = self.index_to_soup(self.newyorker_prefix)
        except:
            return ans
        seclist = self.load_global_nav(soup)
        ans.append(('Front Page',self.load_index_page(soup)))
        for (sectitle,securl) in seclist:
            ans.append((sectitle,self.load_global_section(securl)))
        return self.filter_ans(ans)
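The `fixChars` helper above maps Windows-1252 punctuation bytes to their Unicode equivalents before text reaches the Kindle NCX. A self-contained sketch of the same idea (the sample input is invented):

    import re

    CP1252_PUNCT = {"\x91": "‘", "\x92": "’", "\x93": "“",
                    "\x94": "”", "\x96": "–", "\x97": "—"}

    def fix_chars(s):
        # Replace each stray cp1252 byte with real Unicode punctuation
        for byte, uni in CP1252_PUNCT.items():
            s = re.sub(byte, uni, s)
        return s

    print(fix_chars("\x93quoted\x94 \x96 dash"))  # -> “quoted” – dash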
@@ -7,7 +7,6 @@ sfgate.com
'''

from calibre.web.feeds.news import BasicNewsRecipe
import re

class SanFranciscoChronicle(BasicNewsRecipe):
    title = u'San Francisco Chronicle'

@@ -19,16 +18,7 @@ class SanFranciscoChronicle(BasicNewsRecipe):
    max_articles_per_feed = 100
    no_stylesheets = True
    use_embedded_content = False

    remove_tags_before = {'id':'printheader'}

    remove_tags = [
        dict(name='div',attrs={'id':'printheader'})
        ,dict(name='a', attrs={'href':re.compile('http://ads\.pheedo\.com.*')})
        ,dict(name='div',attrs={'id':'footer'})
    ]
    auto_cleanup = True

    extra_css = '''
        h1{font-family :Arial,Helvetica,sans-serif; font-size:large;}

@@ -43,33 +33,13 @@ class SanFranciscoChronicle(BasicNewsRecipe):
    '''

    feeds = [
        (u'Top News Stories', u'http://www.sfgate.com/rss/feeds/news.xml')
        (u'Bay Area News', u'http://www.sfgate.com/bayarea/feed/Bay-Area-News-429.php'),
        (u'City Insider', u'http://www.sfgate.com/default/feed/City-Insider-Blog-573.php'),
        (u'Crime Scene', u'http://www.sfgate.com/rss/feed/Crime-Scene-Blog-599.php'),
        (u'Education News', u'http://www.sfgate.com/education/feed/Education-News-from-SFGate-430.php'),
        (u'National News', u'http://www.sfgate.com/rss/feed/National-News-RSS-Feed-435.php'),
        (u'Weird News', u'http://www.sfgate.com/weird/feed/Weird-News-RSS-Feed-433.php'),
        (u'World News', u'http://www.sfgate.com/rss/feed/World-News-From-SFGate-432.php'),
    ]

    def print_version(self,url):
        url= url +"&type=printable"
        return url

    def get_article_url(self, article):
        print str(article['title_detail']['value'])
        url = article.get('guid',None)
        url = "http://www.sfgate.com/cgi-bin/article.cgi?f="+url
        if "Presented By:" in str(article['title_detail']['value']):
            url = ''
        return url
@@ -1,8 +1,11 @@
#!/usr/bin/env python
__license__ = 'GPL v3'
__author__ = 'Lorenzo Vigentini'
__copyright__ = '2009, Lorenzo Vigentini <l.vigentini at gmail.com>'
description = 'the Escapist Magazine - v1.02 (09, January 2010)'
__author__ = 'Lorenzo Vigentini and Tom Surace'
__copyright__ = '2009, Lorenzo Vigentini <l.vigentini at gmail.com>, 2013 Tom Surace <tekhedd@byteheaven.net>'
description = 'The Escapist Magazine - v1.3 (2013, April 2013)'

#
# Based on 'the Escapist Magazine - v1.02 (09, January 2010)'

'''
http://www.escapistmagazine.com/
@@ -11,12 +14,11 @@ http://www.escapistmagazine.com/
from calibre.web.feeds.news import BasicNewsRecipe

class al(BasicNewsRecipe):
    author = 'Lorenzo Vigentini'
    author = 'Lorenzo Vigentini and Tom Surace'
    description = 'The Escapist Magazine'

    cover_url = 'http://cdn.themis-media.com/themes/escapistmagazine/default/images/logo.png'
    title = u'The Escapist Magazine'
    publisher = 'Themis media'
    publisher = 'Themis Media'
    category = 'Video games news, lifestyle, gaming culture'

    language = 'en'

@@ -36,18 +38,19 @@ class al(BasicNewsRecipe):
    ]

    def print_version(self,url):
        # Expect article url in the format:
        # http://www.escapistmagazine.com/news/view/123198-article-name?utm_source=rss&utm_medium=rss&utm_campaign=news
        #
        baseURL='http://www.escapistmagazine.com'
        segments = url.split('/')
        #basename = '/'.join(segments[:3]) + '/'
        subPath= '/'+ segments[3] + '/'
        articleURL=(segments[len(segments)-1])[0:5]

        if articleURL[4] =='-':
            articleURL=articleURL[:4]
        # The article number is the "number" that starts the name
        articleNumber = segments[len(segments)-1];  # the "article name"
        articleNumber = articleNumber.split('-')[0];  # keep part before hyphen

        printVerString='print/'+ articleURL
        s= baseURL + subPath + printVerString
        return s
        fullUrl = baseURL + subPath + 'print/' + articleNumber
        return fullUrl

    keep_only_tags = [
        dict(name='div', attrs={'id':'article'})
@@ -1,5 +1,5 @@
__license__ = 'GPL v3'
__copyright__ = '2009-2011, Darko Miletic <darko.miletic at gmail.com>'
__copyright__ = '2009-2013, Darko Miletic <darko.miletic at gmail.com>'

'''
theonion.com
@@ -10,7 +10,7 @@ from calibre.web.feeds.news import BasicNewsRecipe
class TheOnion(BasicNewsRecipe):
    title = 'The Onion'
    __author__ = 'Darko Miletic'
    description = "America's finest news source"
    description = "The Onion, America's Finest News Source, is an award-winning publication covering world, national, and * local issues. It is updated daily online and distributed weekly in select American cities."
    oldest_article = 2
    max_articles_per_feed = 100
    publisher = 'Onion, Inc.'
@@ -20,7 +20,8 @@ class TheOnion(BasicNewsRecipe):
    use_embedded_content = False
    encoding = 'utf-8'
    publication_type = 'newsportal'
    masthead_url = 'http://o.onionstatic.com/img/headers/onion_190.png'
    needs_subscription = 'optional'
    masthead_url = 'http://www.theonion.com/static/onion/img/logo_1x.png'
    extra_css = """
        body{font-family: Helvetica,Arial,sans-serif}
        .section_title{color: gray; text-transform: uppercase}
@@ -37,17 +38,11 @@ class TheOnion(BasicNewsRecipe):
        , 'language' : language
    }

    keep_only_tags = [
        dict(name='h2', attrs={'class':['section_title','title']})
        ,dict(attrs={'class':['main_image','meta','article_photo_lead','article_body']})
        ,dict(attrs={'id':['entries']})
    ]
    keep_only_tags = [dict(attrs={'class':'full-article'})]
    remove_attributes = ['lang','rel']
    remove_tags_after = dict(attrs={'class':['article_body','feature_content']})
    remove_tags = [
        dict(name=['object','link','iframe','base','meta'])
        ,dict(name='div', attrs={'class':['toolbar_side','graphical_feature','toolbar_bottom']})
        ,dict(name='div', attrs={'id':['recent_slider','sidebar','pagination','related_media']})
        ,dict(attrs={'class':lambda x: x and 'share-tools' in x.split()})
    ]

@@ -56,6 +51,17 @@ class TheOnion(BasicNewsRecipe):
        ,(u'Sports' , u'http://feeds.theonion.com/theonion/sports' )
    ]

    def get_browser(self):
        br = BasicNewsRecipe.get_browser(self)
        br.open('http://www.theonion.com/')
        if self.username is not None and self.password is not None:
            br.open('https://ui.ppjol.com/login/onion/u/j_spring_security_check')
            br.select_form(name='f')
            br['j_username'] = self.username
            br['j_password'] = self.password
            br.submit()
        return br

    def get_article_url(self, article):
        artl = BasicNewsRecipe.get_article_url(self, article)
        if artl.startswith('http://www.theonion.com/audio/'):
@@ -79,4 +85,8 @@ class TheOnion(BasicNewsRecipe):
            else:
                str = self.tag_to_string(item)
                item.replaceWith(str)
        for item in soup.findAll('img'):
            if item.has_key('data-src'):
                item['src'] = item['data-src']
        return soup
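`needs_subscription = 'optional'` makes calibre offer a credentials dialog without requiring one, and `get_browser` then logs in only when both fields were filled. A minimal sketch of that pattern for any recipe (the login URL and form fields are invented placeholders):

    from calibre.web.feeds.news import BasicNewsRecipe

    class ExampleRecipe(BasicNewsRecipe):
        title = 'Example'
        needs_subscription = 'optional'  # prompt for, but do not require, a login

        def get_browser(self):
            br = BasicNewsRecipe.get_browser(self)
            if self.username is not None and self.password is not None:
                br.open('http://example.com/login')
                br.select_form(name='login')
                br['user'] = self.username
                br['pass'] = self.password
                br.submit()
            return br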
@@ -1,7 +1,5 @@
#!/usr/bin/env python

__license__ = 'GPL v3'
__copyright__ = '2008-2009, Darko Miletic <darko.miletic at gmail.com>'
__copyright__ = '2008-2013, Darko Miletic <darko.miletic at gmail.com>'
'''
tomshardware.com/us
'''
@@ -16,21 +14,19 @@ class Tomshardware(BasicNewsRecipe):
    publisher = "Tom's Hardware"
    category = 'news, IT, hardware, USA'
    no_stylesheets = True
    needs_subscription = True
    needs_subscription = 'optional'
    language = 'en'

    INDEX = 'http://www.tomshardware.com'
    LOGIN = INDEX + '/membres/'
    remove_javascript = True
    use_embedded_content= False

    html2lrf_options = [
        '--comment', description
        , '--category', category
        , '--publisher', publisher
    ]

    html2epub_options = 'publisher="' + publisher + '"\ncomments="' + description + '"\ntags="' + category + '"'
    conversion_options = {
        'comment'     : description
        , 'tags'      : category
        , 'publisher' : publisher
        , 'language'  : language
    }

    def get_browser(self):
        br = BasicNewsRecipe.get_browser(self)
@@ -50,8 +46,8 @@ class Tomshardware(BasicNewsRecipe):
    ]

    feeds = [
        (u'Latest Articles', u'http://www.tomshardware.com/feeds/atom/tom-s-hardware-us,18-2.xml' )
        ,(u'Latest News' , u'http://www.tomshardware.com/feeds/atom/tom-s-hardware-us,18-1.xml')
        (u'Reviews', u'http://www.tomshardware.com/feeds/rss2/tom-s-hardware-us,18-2.xml')
        ,(u'News' , u'http://www.tomshardware.com/feeds/rss2/tom-s-hardware-us,18-1.xml')
    ]

    def print_version(self, url):
@@ -1,5 +1,6 @@
# vim:fileencoding=UTF-8:ts=4:sw=4:sta:et:sts=4:ai

import re
from calibre.web.feeds.recipes import BasicNewsRecipe

class TVXS(BasicNewsRecipe):
@@ -8,19 +9,30 @@ class TVXS(BasicNewsRecipe):
    description = 'News from Greece'
    max_articles_per_feed = 100
    oldest_article = 3
    simultaneous_downloads = 1
    publisher = 'TVXS'
    category = 'news, GR'
    category = 'news, sport, greece'
    language = 'el'
    encoding = None
    use_embedded_content = False
    remove_empty_feeds = True
    #conversion_options = { 'linearize_tables': True}
    conversion_options = {'smarten_punctuation': True}
    no_stylesheets = True
    publication_type = 'newspaper'
    remove_tags_before = dict(name='h1',attrs={'class':'print-title'})
    remove_tags_after = dict(name='div',attrs={'class':'field field-type-relevant-content field-field-relevant-articles'})
    remove_attributes = ['width', 'src', 'header', 'footer']

    remove_tags = [dict(name='div',attrs={'class':'field field-type-relevant-content field-field-relevant-articles'}),
                   dict(name='div',attrs={'class':'field field-type-filefield field-field-image-gallery'}),
                   dict(name='div',attrs={'class':'filefield-file'})]
    remove_attributes = ['border', 'cellspacing', 'align', 'cellpadding', 'colspan', 'valign', 'vspace', 'hspace', 'alt', 'width', 'height']
    extra_css = 'body { font-family: verdana, helvetica, sans-serif; } \
                 table { width: 100%; } \
                 td img { display: block; margin: 5px auto; } \
                 ul { padding-top: 10px; } \
                 ol { padding-top: 10px; } \
                 li { padding-top: 5px; padding-bottom: 5px; } \
                 h1 { text-align: center; font-size: 125%; font-weight: bold; } \
                 h2, h3, h4, h5, h6 { text-align: center; font-size: 100%; font-weight: bold; }'
    preprocess_regexps = [(re.compile(r'<br[ ]*/>', re.IGNORECASE), lambda m: ''), (re.compile(r'<br[ ]*clear.*/>', re.IGNORECASE), lambda m: '')]

    feeds = [(u'Ελλάδα', 'http://tvxs.gr/feeds/2/feed.xml'),
             (u'Κόσμος', 'http://tvxs.gr/feeds/5/feed.xml'),
@@ -35,17 +47,10 @@ class TVXS(BasicNewsRecipe):
             (u'Ιστορία', 'http://tvxs.gr/feeds/1573/feed.xml'),
             (u'Χιούμορ', 'http://tvxs.gr/feeds/692/feed.xml')]

    def print_version(self, url):
        import urllib2, urlparse, StringIO, gzip

        fp = urllib2.urlopen(url)
        data = fp.read()
        if fp.info()['content-encoding'] == 'gzip':
            gzip_data = StringIO.StringIO(data)
            gzipper = gzip.GzipFile(fileobj=gzip_data)
            data = gzipper.read()
        fp.close()
        br = self.get_browser()
        response = br.open(url)
        data = response.read()

        pos_1 = data.find('<a href="/print/')
        if pos_1 == -1:
@@ -57,5 +62,5 @@ class TVXS(BasicNewsRecipe):
        pos_1 += len('<a href="')
        new_url = data[pos_1:pos_2]

        print_url = urlparse.urljoin(url, new_url)
        print_url = "http://tvxs.gr" + new_url
        return print_url
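The rewritten `print_version` fetches the page through the recipe's own browser, which handles gzip and cookies, and then scans the raw HTML for the print link. A runnable sketch of that string-scanning step; the hunk does not show how `pos_2` is computed, so the form below is an assumption, as is the HTML:

    data = '<p><a href="/print/12345">Print</a></p>'
    pos_1 = data.find('<a href="/print/')
    if pos_1 != -1:
        pos_1 += len('<a href="')
        pos_2 = data.find('"', pos_1)  # assumed: the closing quote ends the href
        print("http://tvxs.gr" + data[pos_1:pos_2])  # -> http://tvxs.gr/print/12345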
17 recipes/universe_today.recipe Normal file
@@ -0,0 +1,17 @@
from calibre.web.feeds.news import BasicNewsRecipe

class UniverseToday(BasicNewsRecipe):
    title = u'Universe Today'
    language = 'en'
    description = u'Space and astronomy news.'
    __author__ = 'seird'
    publisher = u'universetoday.com'
    category = 'science, astronomy, news, rss'
    oldest_article = 7
    max_articles_per_feed = 40
    auto_cleanup = True
    no_stylesheets = True
    use_embedded_content = False
    remove_empty_feeds = True

    feeds = [(u'Universe Today', u'http://feeds.feedburner.com/universetoday/pYdq')]
@@ -6,17 +6,62 @@ __license__ = 'GPL v3'
www.canada.com
'''
import re
from calibre.web.feeds.recipes import BasicNewsRecipe
from calibre.web.feeds.news import BasicNewsRecipe

from calibre.ebooks.BeautifulSoup import Tag, BeautifulStoneSoup

class TimesColonist(BasicNewsRecipe):

    # Customization -- remove sections you don't want.
    # If your e-reader is an e-ink Kindle and your output profile is
    # set properly this recipe will not include images because the
    # resulting file is too large. If you have one of these and want
    # images you can set kindle_omit_images = False
    # and remove sections (typically the e-ink Kindles will
    # work with about a dozen of these, but your mileage may vary).

    kindle_omit_images = True

    section_list = [
        ('','Web Front Page'),
        ('news/','News Headlines'),
        ('news/b-c/','BC News'),
        ('news/national/','National News'),
        ('news/world/','World News'),
        ('opinion/','Opinion'),
        ('opinion/letters/','Letters'),
        ('business/','Business'),
        ('business/money/','Money'),
        ('business/technology/','Technology'),
        ('business/working/','Working'),
        ('sports/','Sports'),
        ('sports/hockey/','Hockey'),
        ('sports/football/','Football'),
        ('sports/basketball/','Basketball'),
        ('sports/golf/','Golf'),
        ('entertainment/','entertainment'),
        ('entertainment/go/','Go!'),
        ('entertainment/music/','Music'),
        ('entertainment/books/','Books'),
        ('entertainment/Movies/','Movies'),
        ('entertainment/television/','Television'),
        ('life/','Life'),
        ('life/health/','Health'),
        ('life/travel/','Travel'),
        ('life/driving/','Driving'),
        ('life/homes/','Homes'),
        ('life/food-drink/','Food & Drink')
    ]

    title = u'Victoria Times Colonist'
    url_prefix = 'http://www.timescolonist.com'
    description = u'News from Victoria, BC'
    fp_tag = 'CAN_TC'

    masthead_url = 'http://www.timescolonist.com/gmg/img/global/logoTimesColonist.png'

    url_list = []
    language = 'en_CA'
    __author__ = 'Nick Redding'
@@ -29,15 +74,21 @@ class TimesColonist(BasicNewsRecipe):
    .caption { font-size: xx-small; font-style: italic; font-weight: normal; }
    '''
    keep_only_tags = [dict(name='div', attrs={'class':re.compile('main.content')})]
    remove_tags = [{'class':'comments'},

    def __init__(self, options, log, progress_reporter):
        self.remove_tags = [{'class':'comments'},
                            {'id':'photocredit'},
                            dict(name='div', attrs={'class':re.compile('top.controls')}),
                            dict(name='div', attrs={'class':re.compile('^comments')}),
                            dict(name='div', attrs={'class':re.compile('social')}),
                            dict(name='div', attrs={'class':re.compile('tools')}),
                            dict(name='div', attrs={'class':re.compile('bottom.tools')}),
                            dict(name='div', attrs={'class':re.compile('window')}),
                            dict(name='div', attrs={'class':re.compile('related.news.element')})]

        print("PROFILE NAME = "+options.output_profile.short_name)
        if self.kindle_omit_images and options.output_profile.short_name in ['kindle', 'kindle_dx', 'kindle_pw']:
            self.remove_tags.append(dict(name='div', attrs={'class':re.compile('image-container')}))
        BasicNewsRecipe.__init__(self, options, log, progress_reporter)

    def get_cover_url(self):
        from datetime import timedelta, date
@@ -122,7 +173,6 @@ class TimesColonist(BasicNewsRecipe):
    def preprocess_html(self,soup):
        byline = soup.find('p',attrs={'class':re.compile('ancillary')})
        if byline is not None:
            byline.find('a')
            authstr = self.tag_to_string(byline,False)
            authstr = re.sub('/ *Times Colonist','/',authstr, flags=re.IGNORECASE)
            authstr = re.sub('BY */','',authstr, flags=re.IGNORECASE)
@@ -149,9 +199,10 @@ class TimesColonist(BasicNewsRecipe):
        atag = htag.a
        if atag is not None:
            url = atag['href']
            #print("Checking "+url)
            if atag['href'].startswith('/'):
                url = self.url_prefix+atag['href']
            url = url.strip()
            # print("Checking >>"+url+'<<\n\r')
            if url.startswith('/'):
                url = self.url_prefix+url
            if url in self.url_list:
                return
            self.url_list.append(url)
@@ -171,10 +222,10 @@ class TimesColonist(BasicNewsRecipe):
            if dtag is not None:
                description = self.tag_to_string(dtag,False)
            article_list.append(dict(title=title,url=url,date='',description=description,author='',content=''))
            #print(sectitle+title+": description = "+description+" URL="+url)
            print(sectitle+title+": description = "+description+" URL="+url+'\n\r')

    def add_section_index(self,ans,securl,sectitle):
        print("Add section url="+self.url_prefix+'/'+securl)
        print("Add section url="+self.url_prefix+'/'+securl+'\n\r')
        try:
            soup = self.index_to_soup(self.url_prefix+'/'+securl)
        except:
@@ -193,33 +244,7 @@ class TimesColonist(BasicNewsRecipe):

    def parse_index(self):
        ans = []
        ans = self.add_section_index(ans,'','Web Front Page')
        ans = self.add_section_index(ans,'news/','News Headlines')
        ans = self.add_section_index(ans,'news/b-c/','BC News')
        ans = self.add_section_index(ans,'news/national/','Natioanl News')
        ans = self.add_section_index(ans,'news/world/','World News')
        ans = self.add_section_index(ans,'opinion/','Opinion')
        ans = self.add_section_index(ans,'opinion/letters/','Letters')
        ans = self.add_section_index(ans,'business/','Business')
        ans = self.add_section_index(ans,'business/money/','Money')
        ans = self.add_section_index(ans,'business/technology/','Technology')
        ans = self.add_section_index(ans,'business/working/','Working')
        ans = self.add_section_index(ans,'sports/','Sports')
        ans = self.add_section_index(ans,'sports/hockey/','Hockey')
        ans = self.add_section_index(ans,'sports/football/','Football')
        ans = self.add_section_index(ans,'sports/basketball/','Basketball')
        ans = self.add_section_index(ans,'sports/golf/','Golf')
        ans = self.add_section_index(ans,'entertainment/','entertainment')
        ans = self.add_section_index(ans,'entertainment/go/','Go!')
        ans = self.add_section_index(ans,'entertainment/music/','Music')
        ans = self.add_section_index(ans,'entertainment/books/','Books')
        ans = self.add_section_index(ans,'entertainment/Movies/','movies')
        ans = self.add_section_index(ans,'entertainment/television/','Television')
        ans = self.add_section_index(ans,'life/','Life')
        ans = self.add_section_index(ans,'life/health/','Health')
        ans = self.add_section_index(ans,'life/travel/','Travel')
        ans = self.add_section_index(ans,'life/driving/','Driving')
        ans = self.add_section_index(ans,'life/homes/','Homes')
        ans = self.add_section_index(ans,'life/food-drink/','Food & Drink')
        for (url,title) in self.section_list:
            ans = self.add_section_index(ans,url,title)
        return ans
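The refactor above replaces twenty-eight hard-coded `add_section_index` calls with one loop over the class-level `section_list`, so trimming sections now means editing a single table (and the 'Natioanl News' typo disappears along the way). A stripped-down sketch of the pattern (section names invented, the fetch elided):

    section_list = [
        ('', 'Front Page'),
        ('news/', 'News'),
        ('sports/', 'Sports'),
    ]

    def add_section_index(ans, securl, sectitle):
        # ... fetch securl and append (sectitle, articles) to ans ...
        return ans

    ans = []
    for (url, title) in section_list:
        ans = add_section_index(ans, url, title)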
@@ -1,6 +1,3 @@
" Project wide builtins
let $PYFLAKES_BUILTINS = "_,dynamic_property,__,P,I,lopen,icu_lower,icu_upper,icu_title,ngettext"

" Include directories for C++ modules
let g:syntastic_cpp_include_dirs = [
    \'/usr/include/python2.7',
4 setup.cfg Normal file
@@ -0,0 +1,4 @@
[flake8]
max-line-length = 160
builtins = _,dynamic_property,__,P,I,lopen,icu_lower,icu_upper,icu_title,ngettext
ignore = E12,E22,E231,E301,E302,E304,E401,W391
@@ -24,38 +24,10 @@ class Message:
    def __str__(self):
        return '%s:%s: %s' % (self.filename, self.lineno, self.msg)

def check_for_python_errors(code_string, filename):
    import _ast
    # First, compile into an AST and handle syntax errors.
    try:
        tree = compile(code_string, filename, "exec", _ast.PyCF_ONLY_AST)
    except (SyntaxError, IndentationError) as value:
        msg = value.args[0]

        (lineno, offset, text) = value.lineno, value.offset, value.text

        # If there's an encoding problem with the file, the text is None.
        if text is None:
            # Avoid using msg, since for the only known case, it contains a
            # bogus message that claims the encoding the file declared was
            # unknown.
            msg = "%s: problem decoding source" % filename

        return [Message(filename, lineno, msg)]
    else:
        checker = __import__('pyflakes.checker').checker
        # Okay, it's syntactically valid.  Now check it.
        w = checker.Checker(tree, filename)
        w.messages.sort(lambda a, b: cmp(a.lineno, b.lineno))
        return [Message(x.filename, x.lineno, x.message%x.message_args) for x in
                w.messages]

class Check(Command):

    description = 'Check for errors in the calibre source code'

    BUILTINS = ['_', '__', 'dynamic_property', 'I', 'P', 'lopen', 'icu_lower',
                'icu_upper', 'icu_title', 'ngettext']
    CACHE = '.check-cache.pickle'

    def get_files(self, cache):
@@ -65,8 +37,8 @@ class Check(Command):
            mtime = os.stat(y).st_mtime
            if cache.get(y, 0) == mtime:
                continue
            if (f.endswith('.py') and f not in ('feedparser.py',
                    'pyparsing.py', 'markdown.py') and
            if (f.endswith('.py') and f not in (
                    'feedparser.py', 'pyparsing.py', 'markdown.py') and
                    'prs500/driver.py' not in y):
                yield y, mtime
            if f.endswith('.coffee'):
@@ -79,21 +51,18 @@ class Check(Command):
            if f.endswith('.recipe') and cache.get(f, 0) != mtime:
                yield f, mtime

    def run(self, opts):
        cache = {}
        if os.path.exists(self.CACHE):
            cache = cPickle.load(open(self.CACHE, 'rb'))
        builtins = list(set_builtins(self.BUILTINS))
        for f, mtime in self.get_files(cache):
            self.info('\tChecking', f)
            errors = False
            ext = os.path.splitext(f)[1]
            if ext in {'.py', '.recipe'}:
                w = check_for_python_errors(open(f, 'rb').read(), f)
                if w:
                p = subprocess.Popen(['flake8', '--ignore=E,W', f])
                if p.wait() != 0:
                    errors = True
                    self.report_errors(w)
            else:
                from calibre.utils.serve_coffee import check_coffeescript
                try:
@@ -106,8 +75,6 @@ class Check(Command):
                    self.j(self.SRC, '../session.vim'), '-f', f])
                raise SystemExit(1)
            cache[f] = mtime
        for x in builtins:
            delattr(__builtin__, x)
        cPickle.dump(cache, open(self.CACHE, 'wb'), -1)
        wn_path = os.path.expanduser('~/work/servers/src/calibre_servers/main')
        if os.path.exists(wn_path):
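With this change the check command shells out to flake8, which reads the `builtins` and `ignore` settings from the new setup.cfg on its own. A minimal sketch of the per-file invocation (the target path is invented):

    import subprocess

    def lint(path):
        # E/W style classes are ignored on the command line; setup.cfg
        # supplies the project-wide builtins list
        p = subprocess.Popen(['flake8', '--ignore=E,W', path])
        return p.wait() == 0  # True when flake8 found no problems

    print(lint('src/calibre/example.py'))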
(File diff suppressed because it is too large.)
@@ -4,7 +4,7 @@ __license__ = 'GPL v3'
__copyright__ = '2008, Kovid Goyal kovid@kovidgoyal.net'
__docformat__ = 'restructuredtext en'
__appname__ = u'calibre'
numeric_version = (0, 9, 26)
numeric_version = (0, 9, 27)
__version__ = u'.'.join(map(unicode, numeric_version))
__author__ = u"Kovid Goyal <kovid@kovidgoyal.net>"
@@ -757,9 +757,10 @@ from calibre.ebooks.metadata.sources.isbndb import ISBNDB
from calibre.ebooks.metadata.sources.overdrive import OverDrive
from calibre.ebooks.metadata.sources.douban import Douban
from calibre.ebooks.metadata.sources.ozon import Ozon
# from calibre.ebooks.metadata.sources.google_images import GoogleImages
from calibre.ebooks.metadata.sources.google_images import GoogleImages
from calibre.ebooks.metadata.sources.big_book_search import BigBookSearch

plugins += [GoogleBooks, Amazon, Edelweiss, OpenLibrary, ISBNDB, OverDrive, Douban, Ozon]
plugins += [GoogleBooks, GoogleImages, Amazon, Edelweiss, OpenLibrary, ISBNDB, OverDrive, Douban, Ozon, BigBookSearch]

# }}}

@@ -1467,6 +1468,17 @@ class StoreKoboStore(StoreBase):
    formats = ['EPUB']
    affiliate = True

class StoreKoobeStore(StoreBase):
    name = 'Koobe'
    author = u'Tomasz Długosz'
    description = u'Księgarnia internetowa oferuje ebooki (książki elektroniczne) w postaci plików epub, mobi i pdf.'
    actual_plugin = 'calibre.gui2.store.stores.koobe_plugin:KoobeStore'

    drm_free_only = True
    headquarters = 'PL'
    formats = ['EPUB', 'MOBI', 'PDF']
    affiliate = True

class StoreLegimiStore(StoreBase):
    name = 'Legimi'
    author = u'Tomasz Długosz'
@@ -1649,6 +1661,7 @@ class StoreWoblinkStore(StoreBase):

    headquarters = 'PL'
    formats = ['EPUB', 'MOBI', 'PDF', 'WOBLINK']
    affiliate = True

class XinXiiStore(StoreBase):
    name = 'XinXii'
@@ -1686,6 +1699,7 @@ plugins += [
    StoreGoogleBooksStore,
    StoreGutenbergStore,
    StoreKoboStore,
    StoreKoobeStore,
    StoreLegimiStore,
    StoreLibreDEStore,
    StoreLitResStore,
@@ -91,7 +91,7 @@ def restore_plugin_state_to_default(plugin_or_name):
    config['enabled_plugins'] = ep

default_disabled_plugins = set([
    'Overdrive', 'Douban Books', 'OZON.ru', 'Edelweiss', 'Google Images',
    'Overdrive', 'Douban Books', 'OZON.ru', 'Edelweiss', 'Google Images', 'Big Book Search',
])

def is_disabled(plugin):
@@ -68,4 +68,5 @@ Various things that require other things before they can be migrated:
   libraries/switching/on calibre startup.
3. From refresh in the legacy interface: Rember to flush the composite
   column template cache.
4. Replace the metadatabackup thread with the new implementation when using the new backend.
'''
@ -41,7 +41,6 @@ Differences in semantics from pysqlite:
'''


class DynamicFilter(object): # {{{

'No longer used, present for legacy compatibility'
@ -114,9 +113,10 @@ class DBPrefs(dict): # {{{
return default

def set_namespaced(self, namespace, key, val):
if u':' in key: raise KeyError('Colons are not allowed in keys')
if u':' in namespace: raise KeyError('Colons are not allowed in'
' the namespace')
if u':' in key:
raise KeyError('Colons are not allowed in keys')
if u':' in namespace:
raise KeyError('Colons are not allowed in the namespace')
key = u'namespaced:%s:%s'%(namespace, key)
self[key] = val
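For context, the rewritten checks guard the flat key layout that set_namespaced() relies on. A minimal standalone sketch of that layout (a hypothetical re-creation for illustration, not the calibre class itself):

    prefs = {}

    def set_namespaced(namespace, key, val):
        # colons are the separator, hence the KeyError guards above
        if u':' in key:
            raise KeyError('Colons are not allowed in keys')
        if u':' in namespace:
            raise KeyError('Colons are not allowed in the namespace')
        prefs[u'namespaced:%s:%s' % (namespace, key)] = val

    set_namespaced(u'gui', u'layout', u'wide')
    # prefs == {u'namespaced:gui:layout': u'wide'}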
@ -170,7 +170,8 @@ def pynocase(one, two, encoding='utf-8'):
return cmp(one.lower(), two.lower())

def _author_to_author_sort(x):
if not x: return ''
if not x:
return ''
return author_to_author_sort(x.replace('|', ','))

def icu_collator(s1, s2):
@ -1067,5 +1068,15 @@ class DB(object):
break # Fail silently since nothing catastrophic has happened
curpath = os.path.join(curpath, newseg)

def write_backup(self, path, raw):
path = os.path.abspath(os.path.join(self.library_path, path, 'metadata.opf'))
with lopen(path, 'wb') as f:
f.write(raw)

def read_backup(self, path):
path = os.path.abspath(os.path.join(self.library_path, path, 'metadata.opf'))
with lopen(path, 'rb') as f:
return f.read()

# }}}
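The new write_backup()/read_backup() pair stores the OPF backup alongside the book's files. A hedged sketch of the resulting location, assuming path is the book's relative path as kept in the 'path' field (the concrete values are illustrative):

    import os

    library_path = '/home/user/Calibre Library'   # illustrative
    path = 'Author/Title (1)'                     # illustrative
    opf = os.path.abspath(os.path.join(library_path, path, 'metadata.opf'))
    # -> '/home/user/Calibre Library/Author/Title (1)/metadata.opf'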
src/calibre/db/backup.py (new file, 115 lines)
@ -0,0 +1,115 @@
#!/usr/bin/env python
# vim:fileencoding=UTF-8
from __future__ import (unicode_literals, division, absolute_import,
                        print_function)

__license__ = 'GPL v3'
__copyright__ = '2013, Kovid Goyal <kovid at kovidgoyal.net>'
__docformat__ = 'restructuredtext en'

import weakref, traceback
from threading import Thread, Event

from calibre import prints
from calibre.ebooks.metadata.opf2 import metadata_to_opf

class Abort(Exception):
    pass

class MetadataBackup(Thread):
    '''
    Continuously backup changed metadata into OPF files
    in the book directory. This class runs in its own
    thread.
    '''

    def __init__(self, db, interval=2, scheduling_interval=0.1):
        Thread.__init__(self)
        self.daemon = True
        self._db = weakref.ref(db)
        self.stop_running = Event()
        self.interval = interval
        self.scheduling_interval = scheduling_interval

    @property
    def db(self):
        ans = self._db()
        if ans is None:
            raise Abort()
        return ans

    def stop(self):
        self.stop_running.set()

    def wait(self, interval):
        if self.stop_running.wait(interval):
            raise Abort()

    def run(self):
        while not self.stop_running.is_set():
            try:
                self.wait(self.interval)
                self.do_one()
            except Abort:
                break

    def do_one(self):
        try:
            book_id = self.db.get_a_dirtied_book()
            if book_id is None:
                return
        except Abort:
            raise
        except:
            # Happens during interpreter shutdown
            return

        self.wait(0)

        try:
            mi, sequence = self.db.get_metadata_for_dump(book_id)
        except:
            prints('Failed to get backup metadata for id:', book_id, 'once')
            traceback.print_exc()
            self.wait(self.interval)
            try:
                mi, sequence = self.db.get_metadata_for_dump(book_id)
            except:
                prints('Failed to get backup metadata for id:', book_id, 'again, giving up')
                traceback.print_exc()
                return

        if mi is None:
            self.db.clear_dirtied(book_id, sequence)

        # Give the GUI thread a chance to do something. Python threads don't
        # have priorities, so this thread would naturally keep the processor
        # until some scheduling event happens. The wait makes such an event
        self.wait(self.scheduling_interval)

        try:
            raw = metadata_to_opf(mi)
        except:
            prints('Failed to convert to opf for id:', book_id)
            traceback.print_exc()
            return

        self.wait(self.scheduling_interval)

        try:
            self.db.write_backup(book_id, raw)
        except:
            prints('Failed to write backup metadata for id:', book_id, 'once')
            self.wait(self.interval)
            try:
                self.db.write_backup(book_id, raw)
            except:
                prints('Failed to write backup metadata for id:', book_id, 'again, giving up')
                return

        self.db.clear_dirtied(book_id, sequence)

    def break_cycles(self):
        # Legacy compatibility
        pass
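A minimal usage sketch of the new thread, mirroring how the test_backup test later in this commit drives it (cache stands for a calibre.db Cache instance; the intervals are illustrative):

    from calibre.db.backup import MetadataBackup

    mb = MetadataBackup(cache, interval=0.01, scheduling_interval=0)
    mb.start()   # polls cache.get_a_dirtied_book() every interval seconds
    # ... change metadata via cache.set_field(...) ...
    mb.stop()    # sets the stop Event; the loop unwinds via Abort
    mb.join()

Note that the thread holds only a weakref to the db, so a garbage-collected Cache aborts the loop instead of being kept alive by the backup thread.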
@ -7,7 +7,7 @@ __license__ = 'GPL v3'
__copyright__ = '2011, Kovid Goyal <kovid@kovidgoyal.net>'
__docformat__ = 'restructuredtext en'

import os, traceback
import os, traceback, random
from io import BytesIO
from collections import defaultdict
from functools import wraps, partial
@ -15,7 +15,7 @@ from functools import wraps, partial
from calibre.constants import iswindows
from calibre.db import SPOOL_SIZE
from calibre.db.categories import get_categories
from calibre.db.locking import create_locks, RecordLock
from calibre.db.locking import create_locks
from calibre.db.errors import NoSuchFormat
from calibre.db.fields import create_field
from calibre.db.search import Search
@ -23,9 +23,10 @@ from calibre.db.tables import VirtualTable
from calibre.db.write import get_series_values
from calibre.db.lazy import FormatMetadata, FormatsList
from calibre.ebooks.metadata.book.base import Metadata
from calibre.ebooks.metadata.opf2 import metadata_to_opf
from calibre.ptempfile import (base_dir, PersistentTemporaryFile,
SpooledTemporaryFile)
from calibre.utils.date import now
from calibre.utils.date import now as nowf
from calibre.utils.icu import sort_key

def api(f):
@ -57,9 +58,10 @@ class Cache(object):
self.fields = {}
self.composites = set()
self.read_lock, self.write_lock = create_locks()
self.record_lock = RecordLock(self.read_lock)
self.format_metadata_cache = defaultdict(dict)
self.formatter_template_cache = {}
self.dirtied_cache = {}
self.dirtied_sequence = 0
self._search_api = Search(self.field_metadata.get_search_terms())

# Implement locking for all simple read/write API methods
@ -78,17 +80,18 @@ class Cache(object):

self.initialize_dynamic()

@write_api
def initialize_dynamic(self):
# Reconstruct the user categories, putting them into field_metadata
# Assumption is that someone else will fix them if they change.
self.field_metadata.remove_dynamic_categories()
for user_cat in sorted(self.pref('user_categories', {}).iterkeys(), key=sort_key):
for user_cat in sorted(self._pref('user_categories', {}).iterkeys(), key=sort_key):
cat_name = '@' + user_cat # add the '@' to avoid name collision
self.field_metadata.add_user_category(label=cat_name, name=user_cat)

# add grouped search term user categories
muc = frozenset(self.pref('grouped_search_make_user_categories', []))
for cat in sorted(self.pref('grouped_search_terms', {}).iterkeys(), key=sort_key):
muc = frozenset(self._pref('grouped_search_make_user_categories', []))
for cat in sorted(self._pref('grouped_search_terms', {}).iterkeys(), key=sort_key):
if cat in muc:
# There is a chance that these can be duplicates of an existing
# user category. Print the exception and continue.
@ -102,10 +105,15 @@ class Cache(object):
# self.field_metadata.add_search_category(label='search', name=_('Searches'))

self.field_metadata.add_grouped_search_terms(
self.pref('grouped_search_terms', {}))
self._pref('grouped_search_terms', {}))

self._search_api.change_locations(self.field_metadata.get_search_terms())

self.dirtied_cache = {x:i for i, (x,) in enumerate(
self.backend.conn.execute('SELECT book FROM metadata_dirtied'))}
if self.dirtied_cache:
self.dirtied_sequence = max(self.dirtied_cache.itervalues())+1

@property
def field_metadata(self):
return self.backend.field_metadata
@ -131,7 +139,7 @@ class Cache(object):
mi.author_link_map = aul
mi.comments = self._field_for('comments', book_id)
mi.publisher = self._field_for('publisher', book_id)
n = now()
n = nowf()
mi.timestamp = self._field_for('timestamp', book_id, default_value=n)
mi.pubdate = self._field_for('pubdate', book_id, default_value=n)
mi.uuid = self._field_for('uuid', book_id,
@ -395,16 +403,19 @@ class Cache(object):
'''
if as_file:
ret = SpooledTemporaryFile(SPOOL_SIZE)
if not self.copy_cover_to(book_id, ret): return
if not self.copy_cover_to(book_id, ret):
return
ret.seek(0)
elif as_path:
pt = PersistentTemporaryFile('_dbcover.jpg')
with pt:
if not self.copy_cover_to(book_id, pt): return
if not self.copy_cover_to(book_id, pt):
return
ret = pt.name
else:
buf = BytesIO()
if not self.copy_cover_to(book_id, buf): return
if not self.copy_cover_to(book_id, buf):
return
ret = buf.getvalue()
if as_image:
from PyQt4.Qt import QImage
@ -413,7 +424,7 @@ class Cache(object):
ret = i
return ret

@api
@read_api
def copy_cover_to(self, book_id, dest, use_hardlink=False):
'''
Copy the cover to the file like object ``dest``. Returns False
@ -422,17 +433,15 @@ class Cache(object):
copied to it iff the path is different from the current path (taking
case sensitivity into account).
'''
with self.read_lock:
try:
path = self._field_for('path', book_id).replace('/', os.sep)
except:
except AttributeError:
return False

with self.record_lock.lock(book_id):
return self.backend.copy_cover_to(path, dest,
use_hardlink=use_hardlink)

@api
@read_api
def copy_format_to(self, book_id, fmt, dest, use_hardlink=False):
'''
Copy the format ``fmt`` to the file like object ``dest``. If the
@ -441,14 +450,12 @@ class Cache(object):
the path is different from the current path (taking case sensitivity
into account).
'''
with self.read_lock:
try:
name = self.fields['formats'].format_fname(book_id, fmt)
path = self._field_for('path', book_id).replace('/', os.sep)
except:
except (KeyError, AttributeError):
raise NoSuchFormat('Record %d has no %s file'%(book_id, fmt))

with self.record_lock.lock(book_id):
return self.backend.copy_format_to(book_id, fmt, name, path, dest,
use_hardlink=use_hardlink)

@ -520,16 +527,16 @@ class Cache(object):
this means that repeated calls yield the same
temp file (which is re-created each time)
'''
with self.read_lock:
ext = ('.'+fmt.lower()) if fmt else ''
if as_path:
if preserve_filename:
with self.read_lock:
try:
fname = self.fields['formats'].format_fname(book_id, fmt)
except:
return None
fname += ext

if as_path:
if preserve_filename:
bd = base_dir()
d = os.path.join(bd, 'format_abspath')
try:
@ -537,21 +544,26 @@ class Cache(object):
except:
pass
ret = os.path.join(d, fname)
with self.record_lock.lock(book_id):
try:
self.copy_format_to(book_id, fmt, ret)
except NoSuchFormat:
return None
else:
with PersistentTemporaryFile(ext) as pt, self.record_lock.lock(book_id):
with PersistentTemporaryFile(ext) as pt:
try:
self.copy_format_to(book_id, fmt, pt)
except NoSuchFormat:
return None
ret = pt.name
elif as_file:
with self.read_lock:
try:
fname = self.fields['formats'].format_fname(book_id, fmt)
except:
return None
fname += ext

ret = SpooledTemporaryFile(SPOOL_SIZE)
with self.record_lock.lock(book_id):
try:
self.copy_format_to(book_id, fmt, ret)
except NoSuchFormat:
@ -562,7 +574,6 @@ class Cache(object):
ret.name = fname
else:
buf = BytesIO()
with self.record_lock.lock(book_id):
try:
self.copy_format_to(book_id, fmt, buf)
except NoSuchFormat:
@ -620,6 +631,30 @@ class Cache(object):
return get_categories(self, sort=sort, book_ids=book_ids,
icon_map=icon_map)

@write_api
def update_last_modified(self, book_ids, now=None):
if now is None:
now = nowf()
if book_ids:
f = self.fields['last_modified']
f.writer.set_books({book_id:now for book_id in book_ids}, self.backend)

@write_api
def mark_as_dirty(self, book_ids):
self._update_last_modified(book_ids)
already_dirtied = set(self.dirtied_cache).intersection(book_ids)
new_dirtied = book_ids - already_dirtied
already_dirtied = {book_id:self.dirtied_sequence+i for i, book_id in enumerate(already_dirtied)}
if already_dirtied:
self.dirtied_sequence = max(already_dirtied.itervalues()) + 1
self.dirtied_cache.update(already_dirtied)
if new_dirtied:
self.backend.conn.executemany('INSERT OR IGNORE INTO metadata_dirtied (book) VALUES (?)',
((x,) for x in new_dirtied))
new_dirtied = {book_id:self.dirtied_sequence+i for i, book_id in enumerate(new_dirtied)}
self.dirtied_sequence = max(new_dirtied.itervalues()) + 1
self.dirtied_cache.update(new_dirtied)

@write_api
def set_field(self, name, book_id_to_val_map, allow_case_change=True):
f = self.fields[name]
@ -657,7 +692,7 @@ class Cache(object):
if dirtied and update_path:
self._update_path(dirtied, mark_as_dirtied=False)

# TODO: Mark these as dirtied so that the opf is regenerated
self._mark_as_dirty(dirtied)

return dirtied
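The sequence numbers handed out here are what lets clear_dirtied() further down detect a book that was re-dirtied while its backup was in flight. A simplified sketch of the invariant, with illustrative ids and sequence values:

    # dirtied_cache maps book_id -> sequence assigned when it was dirtied
    dirtied_cache = {3: 7}
    # the backup thread snapshots sequence=7 via get_metadata_for_dump(3);
    # the book is then dirtied again, so dirtied_cache[3] becomes 8;
    # clear_dirtied(3, 7) sees 8 != 7 and leaves the entry in place,
    # so the newer change is still backed up on the next pass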
@ -668,9 +703,111 @@ class Cache(object):
author = self._field_for('authors', book_id, default_value=(_('Unknown'),))[0]
self.backend.update_path(book_id, title, author, self.fields['path'], self.fields['formats'])
if mark_as_dirtied:
self._mark_as_dirty(book_ids)

@read_api
def get_a_dirtied_book(self):
if self.dirtied_cache:
return random.choice(tuple(self.dirtied_cache.iterkeys()))
return None

@read_api
def get_metadata_for_dump(self, book_id):
mi = None
# get the current sequence number for this book to pass back to the
# backup thread. This will avoid double calls in the case where the
# thread has not done the work between the put and the get_metadata
sequence = self.dirtied_cache.get(book_id, None)
if sequence is not None:
try:
# While a book is being created, the path is empty. Don't bother to
# try to write the opf, because it will go to the wrong folder.
if self._field_for('path', book_id):
mi = self._get_metadata(book_id)
# Always set cover to cover.jpg. Even if cover doesn't exist,
# no harm done. This way no need to call dirtied when
# cover is set/removed
mi.cover = 'cover.jpg'
except:
# This almost certainly means that the book has been deleted while
# the backup operation sat in the queue.
pass
# TODO: Mark these books as dirtied so that metadata.opf is
# re-created
return mi, sequence

@write_api
def clear_dirtied(self, book_id, sequence):
'''
Clear the dirtied indicator for the books. This is used when fetching
metadata, creating an OPF, and writing a file are separated into steps.
The last step is clearing the indicator
'''
dc_sequence = self.dirtied_cache.get(book_id, None)
if dc_sequence is None or sequence is None or dc_sequence == sequence:
self.backend.conn.execute('DELETE FROM metadata_dirtied WHERE book=?',
(book_id,))
self.dirtied_cache.pop(book_id, None)

@write_api
def write_backup(self, book_id, raw):
try:
path = self._field_for('path', book_id).replace('/', os.sep)
except:
return

self.backend.write_backup(path, raw)

@read_api
def dirty_queue_length(self):
return len(self.dirtied_cache)

@read_api
def read_backup(self, book_id):
''' Return the OPF metadata backup for the book as a bytestring or None
if no such backup exists. '''
try:
path = self._field_for('path', book_id).replace('/', os.sep)
except:
return

try:
return self.backend.read_backup(path)
except EnvironmentError:
return None

@write_api
def dump_metadata(self, book_ids=None, remove_from_dirtied=True,
callback=None):
'''
Write metadata for each record to an individual OPF file. If callback
is not None, it is called once at the start with the number of book_ids
being processed. And once for every book_id, with arguments (book_id,
mi, ok).
'''
if book_ids is None:
book_ids = set(self.dirtied_cache)

if callback is not None:
callback(len(book_ids), True, False)

for book_id in book_ids:
if self._field_for('path', book_id) is None:
if callback is not None:
callback(book_id, None, False)
continue
mi, sequence = self._get_metadata_for_dump(book_id)
if mi is None:
if callback is not None:
callback(book_id, mi, False)
continue
try:
raw = metadata_to_opf(mi)
self._write_backup(book_id, raw)
if remove_from_dirtied:
self._clear_dirtied(book_id, sequence)
except:
pass
if callback is not None:
callback(book_id, mi, True)

# }}}
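A hedged example of driving dump_metadata() with a progress callback, following the calling convention in its docstring (the callback name is illustrative):

    def report(book_id, mi, ok):
        # called once up front as report(total, True, False), then once per book
        print(book_id, getattr(mi, 'title', None), ok)

    cache.dump_metadata(callback=report)  # dumps every currently dirtied book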
@ -191,7 +191,7 @@ class SHLock(object): # {{{
try:
return self._free_waiters.pop()
except IndexError:
return Condition(self._lock)#, verbose=True)
return Condition(self._lock)

def _return_waiter(self, waiter):
self._free_waiters.append(waiter)
@ -172,7 +172,6 @@ class SchemaUpgrade(object):
'''
)


def upgrade_version_6(self):
'Show authors in order'
self.conn.execute('''
@ -64,7 +64,7 @@ def _match(query, value, matchkind, use_primary_find_in_search=True):
else:
internal_match_ok = False
for t in value:
try: ### ignore regexp exceptions, required because search-ahead tries before typing is finished
try: # ignore regexp exceptions, required because search-ahead tries before typing is finished
t = icu_lower(t)
if (matchkind == EQUALS_MATCH):
if internal_match_ok:
@ -547,7 +547,8 @@ class Parser(SearchQueryParser):
field_metadata = {}

for x, fm in self.field_metadata.iteritems():
if x.startswith('@'): continue
if x.startswith('@'):
continue
if fm['search_terms'] and x != 'series_sort':
all_locs.add(x)
field_metadata[x] = fm
@ -9,15 +9,32 @@ __docformat__ = 'restructuredtext en'

import unittest, os, argparse

try:
import init_calibre # noqa
except ImportError:
pass

def find_tests():
return unittest.defaultTestLoader.discover(os.path.dirname(os.path.abspath(__file__)), pattern='*.py')

if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('name', nargs='?', default=None, help='The name of the test to run, for e.g. writing.WritingTest.many_many_basic')
parser.add_argument('name', nargs='?', default=None,
help='The name of the test to run, for e.g. writing.WritingTest.many_many_basic or .many_many_basic for a shortcut')
args = parser.parse_args()
if args.name:
unittest.TextTestRunner(verbosity=4).run(unittest.defaultTestLoader.loadTestsFromName(args.name))
if args.name and args.name.startswith('.'):
tests = find_tests()
ans = None
try:
for suite in tests:
for test in suite._tests:
for s in test:
if s._testMethodName == args.name[1:]:
tests = s
raise StopIteration()
except StopIteration:
pass
else:
unittest.TextTestRunner(verbosity=4).run(find_tests())
tests = unittest.defaultTestLoader.loadTestsFromName(args.name) if args.name else find_tests()
unittest.TextTestRunner(verbosity=4).run(tests)
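With this change the runner accepts a dotted test name, the new leading-dot shortcut, or nothing at all. Illustrative invocations (the module path is assumed from the file's location in the tree):

    python -m calibre.db.tests.main writing.WritingTest.test_backup
    python -m calibre.db.tests.main .many_many_basic   # scans all suites for the method
    python -m calibre.db.tests.main                    # runs everything via find_tests()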
@ -8,6 +8,7 @@ __copyright__ = '2011, Kovid Goyal <kovid@kovidgoyal.net>'
__docformat__ = 'restructuredtext en'

import datetime
from io import BytesIO

from calibre.utils.date import utc_tz
from calibre.db.tests.base import BaseTest
@ -205,6 +206,9 @@ class ReadingTest(BaseTest):
else:
self.assertEqual(cdata, cache.cover(book_id, as_path=True),
'Reading of null cover as path failed')
buf = BytesIO()
self.assertFalse(cache.copy_cover_to(99999, buf), 'copy_cover_to() did not return False for non-existent book_id')
self.assertFalse(cache.copy_cover_to(3, buf), 'copy_cover_to() did not return False for non-existent cover')

# }}}

@ -305,6 +309,7 @@ class ReadingTest(BaseTest):
def test_get_formats(self): # {{{
'Test reading ebook formats using the format() method'
from calibre.library.database2 import LibraryDatabase2
from calibre.db.cache import NoSuchFormat
old = LibraryDatabase2(self.library_path)
ids = old.all_ids()
lf = {i:set(old.formats(i, index_is_id=True).split(',')) if old.formats(
@ -332,6 +337,9 @@ class ReadingTest(BaseTest):
self.assertEqual(old, f.read(),
'Failed to read format as path')

buf = BytesIO()
self.assertRaises(NoSuchFormat, cache.copy_format_to, 99999, 'X', buf, 'copy_format_to() failed to raise an exception for non-existent book')
self.assertRaises(NoSuchFormat, cache.copy_format_to, 1, 'X', buf, 'copy_format_to() failed to raise an exception for non-existent format')

# }}}
@ -9,6 +9,7 @@ __docformat__ = 'restructuredtext en'

from collections import namedtuple
from functools import partial
from io import BytesIO

from calibre.ebooks.metadata import author_to_author_sort
from calibre.utils.date import UNDEFINED_DATE
@ -16,6 +17,7 @@ from calibre.db.tests.base import BaseTest

class WritingTest(BaseTest):

# Utils {{{
def create_getter(self, name, getter=None):
if getter is None:
if name.endswith('_index'):
@ -70,6 +72,7 @@ class WritingTest(BaseTest):
'Failed setting for %s, sqlite value not the same: %r != %r'%(
test.name, old_sqlite_res, sqlite_res))
del db
# }}}

def test_one_one(self): # {{{
'Test setting of values in one-one fields'
@ -289,6 +292,67 @@ class WritingTest(BaseTest):
ae(c.field_for('sort', 1), 'Moose, The')
ae(c.field_for('sort', 2), 'Cat')


# }}}

def test_dirtied(self): # {{{
'Test the setting of the dirtied flag and the last_modified column'
cl = self.cloned_library
cache = self.init_cache(cl)
ae, af, sf = self.assertEqual, self.assertFalse, cache.set_field
# First empty dirtied
cache.dump_metadata()
af(cache.dirtied_cache)
af(self.init_cache(cl).dirtied_cache)

prev = cache.field_for('last_modified', 3)
import calibre.db.cache as c
from datetime import timedelta
utime = prev+timedelta(days=1)
onowf = c.nowf
c.nowf = lambda: utime
try:
ae(sf('title', {3:'xxx'}), set([3]))
self.assertTrue(3 in cache.dirtied_cache)
ae(cache.field_for('last_modified', 3), utime)
cache.dump_metadata()
raw = cache.read_backup(3)
from calibre.ebooks.metadata.opf2 import OPF
opf = OPF(BytesIO(raw))
ae(opf.title, 'xxx')
finally:
c.nowf = onowf
# }}}

def test_backup(self): # {{{
'Test the automatic backup of changed metadata'
cl = self.cloned_library
cache = self.init_cache(cl)
ae, af, sf, ff = self.assertEqual, self.assertFalse, cache.set_field, cache.field_for
# First empty dirtied
cache.dump_metadata()
af(cache.dirtied_cache)
from calibre.db.backup import MetadataBackup
interval = 0.01
mb = MetadataBackup(cache, interval=interval, scheduling_interval=0)
mb.start()
try:
ae(sf('title', {1:'title1', 2:'title2', 3:'title3'}), {1,2,3})
ae(sf('authors', {1:'author1 & author2', 2:'author1 & author2', 3:'author1 & author2'}), {1,2,3})
count = 6
while cache.dirty_queue_length() and count > 0:
mb.join(interval)
count -= 1
af(cache.dirty_queue_length())
finally:
mb.stop()
mb.join(interval)
af(mb.is_alive())
from calibre.ebooks.metadata.opf2 import OPF
for book_id in (1, 2, 3):
raw = cache.read_backup(book_id)
opf = OPF(BytesIO(raw))
ae(opf.title, 'title%d'%book_id)
ae(opf.authors, ['author1', 'author2'])
# }}}
@ -97,6 +97,12 @@ class TXTInput(InputFormatPlugin):
if not ienc:
ienc = 'utf-8'
log.debug('No input encoding specified and could not auto detect using %s' % ienc)
# Remove BOM from start of txt as its presence can confuse markdown
import codecs
for bom in (codecs.BOM_UTF16_LE, codecs.BOM_UTF16_BE, codecs.BOM_UTF8, codecs.BOM_UTF32_LE, codecs.BOM_UTF32_BE):
if txt.startswith(bom):
txt = txt[len(bom):]
break
txt = txt.decode(ienc, 'replace')

# Replace entities
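A standalone sketch of the BOM check added above, runnable as-is. One observation on the ordering: the UTF-32-LE BOM begins with the UTF-16-LE BOM bytes, so with this tuple order a UTF-32-LE file would match the UTF-16-LE entry first and only have two bytes stripped:

    import codecs

    txt = codecs.BOM_UTF8 + b'Hello'
    for bom in (codecs.BOM_UTF16_LE, codecs.BOM_UTF16_BE, codecs.BOM_UTF8,
                codecs.BOM_UTF32_LE, codecs.BOM_UTF32_BE):
        if txt.startswith(bom):
            txt = txt[len(bom):]
            break
    # txt == b'Hello'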
@ -68,7 +68,6 @@ class Resource(object): # {{{
self.path = os.path.abspath(os.path.join(basedir, pc.replace('/', os.sep)))
self.fragment = url[-1]


def href(self, basedir=None):
'''
Return a URL pointing to this resource. If it is a file on the filesystem
@ -180,7 +179,6 @@ class ManifestItem(Resource): # {{{
self.mime_type = val
return property(fget=fget, fset=fset)


def __unicode__(self):
return u'<item id="%s" href="%s" media-type="%s" />'%(self.id, self.href(), self.media_type)

@ -190,7 +188,6 @@ class ManifestItem(Resource): # {{{
def __repr__(self):
return unicode(self)


def __getitem__(self, index):
if index == 0:
return self.href()
@ -245,7 +242,6 @@ class Manifest(ResourceCollection): # {{{
ResourceCollection.__init__(self)
self.next_id = 1


def item(self, id):
for i in self:
if i.id == id:
@ -309,13 +305,10 @@ class Spine(ResourceCollection): # {{{
continue
return s



def __init__(self, manifest):
ResourceCollection.__init__(self)
self.manifest = manifest


def replace(self, start, end, ids):
'''
Replace the items between start (inclusive) and end (not inclusive) with
@ -363,7 +356,6 @@ class Guide(ResourceCollection): # {{{
ans += 'title="%s" '%self.title
return ans + '/>'


@staticmethod
def from_opf_guide(references, base_dir=os.getcwdu()):
coll = Guide()
@ -501,9 +493,10 @@ class OPF(object): # {{{
CONTENT = XPath('self::*[re:match(name(), "meta$", "i")]/@content')
TEXT = XPath('string()')


metadata_path = XPath('descendant::*[re:match(name(), "metadata", "i")]')
metadata_elem_path = XPath('descendant::*[re:match(name(), concat($name, "$"), "i") or (re:match(name(), "meta$", "i") and re:match(@name, concat("^calibre:", $name, "$"), "i"))]')
metadata_elem_path = XPath(
'descendant::*[re:match(name(), concat($name, "$"), "i") or (re:match(name(), "meta$", "i") '
'and re:match(@name, concat("^calibre:", $name, "$"), "i"))]')
title_path = XPath('descendant::*[re:match(name(), "title", "i")]')
authors_path = XPath('descendant::*[re:match(name(), "creator", "i") and (@role="aut" or @opf:role="aut" or (not(@role) and not(@opf:role)))]')
bkp_path = XPath('descendant::*[re:match(name(), "contributor", "i") and (@role="bkp" or @opf:role="bkp")]')
@ -640,7 +633,8 @@ class OPF(object): # {{{
if 'toc' in item.href().lower():
toc = item.path

if toc is None: return
if toc is None:
return
self.toc = TOC(base_path=self.base_dir)
is_ncx = getattr(self, 'manifest', None) is not None and \
self.manifest.type_for_id(toc) is not None and \
@ -976,7 +970,6 @@ class OPF(object): # {{{

return property(fget=fget, fset=fset)


@dynamic_property
def language(self):

@ -990,7 +983,6 @@ class OPF(object): # {{{

return property(fget=fget, fset=fset)


@dynamic_property
def languages(self):

@ -1015,7 +1007,6 @@ class OPF(object): # {{{

return property(fget=fget, fset=fset)


@dynamic_property
def book_producer(self):

@ -1196,7 +1187,6 @@ class OPFCreator(Metadata):
if self.cover:
self.guide.set_cover(self.cover)


def create_manifest(self, entries):
'''
Create <manifest>
@ -132,7 +132,7 @@ class Worker(Thread): # Get details {{{
text()="Détails sur le produit" or \
text()="Detalles del producto" or \
text()="Detalhes do produto" or \
text()="登録情報"]/../div[@class="content"]
starts-with(text(), "登録情報")]/../div[@class="content"]
'''
# Editor: is for Spanish
self.publisher_xpath = '''
@ -235,6 +235,12 @@ class Worker(Thread): # Get details {{{
msg = 'Failed to parse amazon details page: %r'%self.url
self.log.exception(msg)
return
if self.domain == 'jp':
for a in root.xpath('//a[@href]'):
if 'black-curtain-redirect.html' in a.get('href'):
self.url = 'http://amazon.co.jp'+a.get('href')
self.log('Black curtain redirect found, following')
return self.get_details()

errmsg = root.xpath('//*[@id="errorMessage"]')
if errmsg:
@ -252,8 +258,8 @@ class Worker(Thread): # Get details {{{
self.log.exception('Error parsing asin for url: %r'%self.url)
asin = None
if self.testing:
import tempfile
with tempfile.NamedTemporaryFile(prefix=asin + '_',
import tempfile, uuid
with tempfile.NamedTemporaryFile(prefix=(asin or str(uuid.uuid4()))+ '_',
suffix='.html', delete=False) as f:
f.write(raw)
print ('Downloaded html for', asin, 'saved in', f.name)
@ -270,7 +276,6 @@ class Worker(Thread): # Get details {{{
self.log.exception('Error parsing authors for url: %r'%self.url)
authors = []


if not title or not authors or not asin:
self.log.error('Could not find title/authors/asin for %r'%self.url)
self.log.error('ASIN: %r Title: %r Authors: %r'%(asin, title,
@ -425,7 +430,6 @@ class Worker(Thread): # Get details {{{
desc = re.sub(r'(?s)<!--.*?-->', '', desc)
return sanitize_comments_html(desc)


def parse_comments(self, root):
ans = ''
desc = root.xpath('//div[@id="ps-content"]/div[@class="content"]')
@ -499,7 +503,7 @@ class Worker(Thread): # Get details {{{
def parse_language(self, pd):
for x in reversed(pd.xpath(self.language_xpath)):
if x.tail:
raw = x.tail.strip()
raw = x.tail.strip().partition(',')[0].strip()
ans = self.lang_map.get(raw, None)
if ans:
return ans
@ -631,7 +635,6 @@ class Amazon(Source):
mi.tags = list(map(fixcase, mi.tags))
mi.isbn = check_isbn(mi.isbn)


def create_query(self, log, title=None, authors=None, identifiers={}, # {{{
domain=None):
if domain is None:
@ -718,7 +721,10 @@ class Amazon(Source):

def title_ok(title):
title = title.lower()
for x in ('bulk pack', '[audiobook]', '[audio cd]'):
bad = ['bulk pack', '[audiobook]', '[audio cd]']
if self.domain == 'com':
bad.append('(spanish edition)')
for x in bad:
if x in title:
return False
return True
@ -745,7 +751,6 @@ class Amazon(Source):
matches.append(a.get('href'))
break


# Keep only the top 5 matches as the matches are sorted by relevance by
# Amazon so lower matches are not likely to be very relevant
return matches[:5]
@ -789,7 +794,6 @@ class Amazon(Source):
log.exception(msg)
return as_unicode(msg)


raw = clean_ascii_chars(xml_to_unicode(raw,
strip_encoding_pats=True, resolve_entities=True)[0])

@ -819,7 +823,6 @@ class Amazon(Source):
# The error is almost always a not found error
found = False


if found:
matches = self.parse_results_page(root)

@ -901,6 +904,11 @@ if __name__ == '__main__': # tests {{{
isbn_test, title_test, authors_test, comments_test, series_test)
com_tests = [ # {{{

( # Has a spanish edition
{'title':'11/22/63'},
[title_test('11/22/63: A Novel', exact=True), authors_test(['Stephen King']),]
),

( # + in title and uses id="main-image" for cover
{'title':'C++ Concurrency in Action'},
[title_test('C++ Concurrency in Action: Practical Multithreading',
@ -911,8 +919,8 @@ if __name__ == '__main__': # tests {{{
( # Series
{'identifiers':{'amazon':'0756407117'}},
[title_test(
"Throne of the Crescent Moon"
, exact=True), series_test('Crescent Moon Kingdoms', 1),
"Throne of the Crescent Moon",
exact=True), series_test('Crescent Moon Kingdoms', 1),
comments_test('Makhslood'),
]
),
@ -920,8 +928,8 @@ if __name__ == '__main__': # tests {{{
( # Different comments markup, using Book Description section
{'identifiers':{'amazon':'0982514506'}},
[title_test(
"Griffin's Destiny: Book Three: The Griffin's Daughter Trilogy"
, exact=True),
"Griffin's Destiny: Book Three: The Griffin's Daughter Trilogy",
exact=True),
comments_test('Jelena'), comments_test('Leslie'),
]
),
@ -1004,6 +1012,11 @@ if __name__ == '__main__': # tests {{{
] # }}}

jp_tests = [ # {{{
( # Adult filtering test
{'identifiers':{'isbn':'4799500066'}},
[title_test(u'Bitch Trap'),]
),

( # isbn -> title, authors
{'identifiers':{'isbn': '9784101302720'}},
[title_test(u'精霊の守り人',
@ -31,7 +31,7 @@ msprefs.defaults['find_first_edition_date'] = False
# Google covers are often poor quality (scans/errors) but they have high
# resolution, so they trump covers from better sources. So make sure they
# are only used if no other covers are found.
msprefs.defaults['cover_priorities'] = {'Google':2, 'Google Images':2}
msprefs.defaults['cover_priorities'] = {'Google':2, 'Google Images':2, 'Big Book Search':2}

def create_log(ostream=None):
from calibre.utils.logging import ThreadSafeLog, FileStream
@ -429,6 +429,40 @@ class Source(Plugin):
mi.tags = list(map(fixcase, mi.tags))
mi.isbn = check_isbn(mi.isbn)

def download_multiple_covers(self, title, authors, urls, get_best_cover, timeout, result_queue, abort, log, prefs_name='max_covers'):
if not urls:
log('No images found for, title: %r and authors: %r'%(title, authors))
return
from threading import Thread
import time
if prefs_name:
urls = urls[:self.prefs[prefs_name]]
if get_best_cover:
urls = urls[:1]
log('Downloading %d covers'%len(urls))
workers = [Thread(target=self.download_image, args=(u, timeout, log, result_queue)) for u in urls]
for w in workers:
w.daemon = True
w.start()
alive = True
start_time = time.time()
while alive and not abort.is_set() and time.time() - start_time < timeout:
alive = False
for w in workers:
if w.is_alive():
alive = True
break
abort.wait(0.1)

def download_image(self, url, timeout, log, result_queue):
try:
ans = self.browser.open_novisit(url, timeout=timeout).read()
result_queue.put((self, ans))
log('Downloaded cover from: %s'%url)
except Exception:
self.log.exception('Failed to download cover from: %r'%url)

# }}}

# Metadata API {{{
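The new helper centralizes the fan-out/join logic that the Google Images plugin previously carried inline (compare the google_images.py hunk later in this commit). A sketch of how a cover source is expected to call it, based on the two plugins in this commit that use it (get_image_urls stands for whatever plugin-specific URL discovery applies):

    def download_cover(self, log, result_queue, abort, title=None, authors=None,
                       identifiers={}, timeout=30, get_best_cover=False):
        urls = self.get_image_urls(title, authors)  # plugin-specific discovery
        self.download_multiple_covers(title, authors, urls, get_best_cover,
                                      timeout, result_queue, abort, log)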
src/calibre/ebooks/metadata/sources/big_book_search.py (new file, 58 lines)
@ -0,0 +1,58 @@
#!/usr/bin/env python
# vim:fileencoding=UTF-8
from __future__ import (unicode_literals, division, absolute_import,
                        print_function)

__license__ = 'GPL v3'
__copyright__ = '2013, Kovid Goyal <kovid@kovidgoyal.net>'
__docformat__ = 'restructuredtext en'

from calibre.ebooks.metadata.sources.base import Source, Option

def get_urls(br, tokens):
    from urllib import quote_plus
    from mechanize import Request
    from lxml import html
    escaped = [quote_plus(x.encode('utf-8')) for x in tokens if x and x.strip()]
    q = b'+'.join(escaped)
    url = 'http://bigbooksearch.com/books/'+q
    br.open(url).read()
    req = Request('http://bigbooksearch.com/query.php?SearchIndex=books&Keywords=%s&ItemPage=1'%q)
    req.add_header('X-Requested-With', 'XMLHttpRequest')
    req.add_header('Referer', url)
    raw = br.open(req).read()
    root = html.fromstring(raw.decode('utf-8'))
    urls = [i.get('src') for i in root.xpath('//img[@src]')]
    return urls

class BigBookSearch(Source):

    name = 'Big Book Search'
    description = _('Downloads multiple book covers from Amazon. Useful to find alternate covers.')
    capabilities = frozenset(['cover'])
    config_help_message = _('Configure the Big Book Search plugin')
    can_get_multiple_covers = True
    options = (Option('max_covers', 'number', 5, _('Maximum number of covers to get'),
                      _('The maximum number of covers to process from the search result')),
    )
    supports_gzip_transfer_encoding = True

    def download_cover(self, log, result_queue, abort,
            title=None, authors=None, identifiers={}, timeout=30, get_best_cover=False):
        if not title:
            return
        br = self.browser
        tokens = tuple(self.get_title_tokens(title)) + tuple(self.get_author_tokens(authors))
        urls = get_urls(br, tokens)
        self.download_multiple_covers(title, authors, urls, get_best_cover, timeout, result_queue, abort, log)

def test():
    from calibre import browser
    import pprint
    br = browser()
    urls = get_urls(br, ['consider', 'phlebas', 'banks'])
    pprint.pprint(urls)

if __name__ == '__main__':
    test()
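The module carries its own smoke test; presumably it can be exercised via calibre's debug runner, which executes a Python file inside the calibre environment (path shown is this file's location in the tree):

    calibre-debug -e src/calibre/ebooks/metadata/sources/big_book_search.py
    # prints the cover image URLs found for the tokens 'consider phlebas banks'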
@ -18,12 +18,13 @@ from calibre.utils.magick.draw import Image, save_cover_data_to

class Worker(Thread):

def __init__(self, plugin, abort, title, authors, identifiers, timeout, rq):
def __init__(self, plugin, abort, title, authors, identifiers, timeout, rq, get_best_cover=False):
Thread.__init__(self)
self.daemon = True

self.plugin = plugin
self.abort = abort
self.get_best_cover = get_best_cover
self.buf = BytesIO()
self.log = create_log(self.buf)
self.title, self.authors, self.identifiers = (title, authors,
@ -37,7 +38,7 @@ class Worker(Thread):
try:
if self.plugin.can_get_multiple_covers:
self.plugin.download_cover(self.log, self.rq, self.abort,
title=self.title, authors=self.authors, get_best_cover=True,
title=self.title, authors=self.authors, get_best_cover=self.get_best_cover,
identifiers=self.identifiers, timeout=self.timeout)
else:
self.plugin.download_cover(self.log, self.rq, self.abort,
@ -72,7 +73,7 @@ def process_result(log, result):
return (plugin, width, height, fmt, data)

def run_download(log, results, abort,
title=None, authors=None, identifiers={}, timeout=30):
title=None, authors=None, identifiers={}, timeout=30, get_best_cover=False):
'''
Run the cover download, putting results into the queue :param:`results`.

@ -89,7 +90,7 @@ def run_download(log, results, abort,
plugins = [p for p in metadata_plugins(['cover']) if p.is_configured()]

rq = Queue()
workers = [Worker(p, abort, title, authors, identifiers, timeout, rq) for p
workers = [Worker(p, abort, title, authors, identifiers, timeout, rq, get_best_cover=get_best_cover) for p
in plugins]
for w in workers:
w.start()
@ -163,7 +164,7 @@ def download_cover(log,
abort = Event()

run_download(log, rq, abort, title=title, authors=authors,
identifiers=identifiers, timeout=timeout)
identifiers=identifiers, timeout=timeout, get_best_cover=True)

results = []
@ -106,6 +106,8 @@ class Worker(Thread): # {{{
parts = pub.partition(':')[0::2]
pub = parts[1] or parts[0]
try:
if ', Ship Date:' in pub:
pub = pub.partition(', Ship Date:')[0]
q = parse_only_date(pub, assume_utc=True)
if q.year != UNDEFINED_DATE:
mi.pubdate = q
@ -39,39 +39,11 @@ class GoogleImages(Source):
title=None, authors=None, identifiers={}, timeout=30, get_best_cover=False):
if not title:
return
from threading import Thread
import time
timeout = max(60, timeout) # Needs at least a minute
title = ' '.join(self.get_title_tokens(title))
author = ' '.join(self.get_author_tokens(authors))
urls = self.get_image_urls(title, author, log, abort, timeout)
if not urls:
log('No images found in Google for, title: %r and authors: %r'%(title, author))
return
urls = urls[:self.prefs['max_covers']]
if get_best_cover:
urls = urls[:1]
workers = [Thread(target=self.download_image, args=(url, timeout, log, result_queue)) for url in urls]
for w in workers:
w.daemon = True
w.start()
alive = True
start_time = time.time()
while alive and not abort.is_set() and time.time() - start_time < timeout:
alive = False
for w in workers:
if w.is_alive():
alive = True
break
abort.wait(0.1)

def download_image(self, url, timeout, log, result_queue):
try:
ans = self.browser.open_novisit(url, timeout=timeout).read()
result_queue.put((self, ans))
log('Downloaded cover from: %s'%url)
except Exception:
self.log.exception('Failed to download cover from: %r'%url)
self.download_multiple_covers(title, authors, urls, get_best_cover, timeout, result_queue, abort, log)

def get_image_urls(self, title, author, log, abort, timeout):
from calibre.utils.ipc.simple_worker import fork_job, WorkerError
@ -51,9 +51,11 @@ def reverse_tag_iter(block):
end = len(block)
while True:
pgt = block.rfind(b'>', 0, end)
if pgt == -1: break
if pgt == -1:
break
plt = block.rfind(b'<', 0, pgt)
if plt == -1: break
if plt == -1:
break
yield block[plt:pgt+1]
end = plt
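A worked example of what the rewritten generator yields; it scans tags from the end of the buffer backwards:

    list(reverse_tag_iter(b'<p>hi</p>'))
    # -> [b'</p>', b'<p>']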
@ -231,12 +233,12 @@ class Mobi8Reader(object):
flowpart = self.flows[j]
nstr = '%04d' % j
m = svg_tag_pattern.search(flowpart)
if m != None:
if m is not None:
# svg
typ = 'svg'
start = m.start()
m2 = image_tag_pattern.search(flowpart)
if m2 != None:
if m2 is not None:
format = 'inline'
dir = None
fname = None
@ -406,6 +408,10 @@ class Mobi8Reader(object):
else:
imgtype = what(None, data)
if imgtype is None:
from calibre.utils.magick.draw import identify_data
try:
imgtype = identify_data(data)[2]
except Exception:
imgtype = 'unknown'
href = 'images/%05d.%s'%(fname_idx, imgtype)
with open(href.replace('/', os.sep), 'wb') as f:
@ -72,7 +72,8 @@ def explode(path, dest, question=lambda x:True):
dest), no_output=True)['result']

def set_cover(oeb):
if 'cover' not in oeb.guide or oeb.metadata['cover']: return
if 'cover' not in oeb.guide or oeb.metadata['cover']:
return
cover = oeb.guide['cover']
if cover.href in oeb.manifest.hrefs:
item = oeb.manifest.hrefs[cover.href]
@ -95,8 +96,9 @@ def rebuild(src_dir, dest_path):
if not opf:
raise ValueError('No OPF file found in %s'%src_dir)
opf = opf[0]
# For debugging, uncomment the following line
# def fork_job(a, b, args=None, no_output=True): do_rebuild(*args)
# For debugging, uncomment the following two lines
# def fork_job(a, b, args=None, no_output=True):
#     do_rebuild(*args)
fork_job('calibre.ebooks.mobi.tweak', 'do_rebuild', args=(opf, dest_path),
no_output=True)
@ -69,7 +69,8 @@ class Resources(object):
cover_href = item.href

for item in self.oeb.manifest.values():
if item.media_type not in OEB_RASTER_IMAGES: continue
if item.media_type not in OEB_RASTER_IMAGES:
continue
try:
data = self.process_image(item.data)
except:
@ -116,8 +117,8 @@ class Resources(object):
Add any images that were created after the call to add_resources()
'''
for item in self.oeb.manifest.values():
if (item.media_type not in OEB_RASTER_IMAGES or item.href in
self.item_map): continue
if (item.media_type not in OEB_RASTER_IMAGES or item.href in self.item_map):
continue
try:
data = self.process_image(item.data)
except:
@ -270,7 +270,7 @@ BINARY_MIME = 'application/octet-stream'

XHTML_CSS_NAMESPACE = u'@namespace "%s";\n' % XHTML_NS

OEB_STYLES = set([CSS_MIME, OEB_CSS_MIME, 'text/x-oeb-css'])
OEB_STYLES = set([CSS_MIME, OEB_CSS_MIME, 'text/x-oeb-css', 'xhtml/css'])
OEB_DOCS = set([XHTML_MIME, 'text/html', OEB_DOC_MIME,
'text/x-oeb-document'])
OEB_RASTER_IMAGES = set([GIF_MIME, JPEG_MIME, PNG_MIME])
@ -43,8 +43,8 @@ sizes, adjust margins, etc. Every action performs only the minimum set of
changes needed for the desired effect.</p>

<p>You should use this tool as the last step in your ebook creation process.</p>

<p>Note that polishing only works on files in the %s formats.</p>
{0}
<p>Note that polishing only works on files in the %s formats.</p>\
''')%_(' or ').join('<b>%s</b>'%x for x in SUPPORTED),

'subset': _('''\
@ -69,7 +69,7 @@ text might not be covered by the subset font.</p>
'jacket': _('''\
<p>Insert a "book jacket" page at the start of the book that contains
all the book metadata such as title, tags, authors, series, comments,
etc.</p>'''),
etc. Any previous book jacket will be replaced.</p>'''),

'remove_jacket': _('''\
<p>Remove a previous inserted book jacket page.</p>
@ -85,7 +85,7 @@ when single quotes at the start of contractions are involved.</p>

def hfix(name, raw):
if name == 'about':
return raw
return raw.format('')
raw = raw.replace('\n\n', '__XX__')
raw = raw.replace('\n', ' ')
raw = raw.replace('__XX__', '\n')
@ -180,5 +180,6 @@ class BorderParse:
elif 'single' in border_style_list:
new_border_dict[att] = 'single'
else:
if border_style_list:
new_border_dict[att] = border_style_list[0]
return new_border_dict
@ -10,8 +10,7 @@ from functools import partial
from threading import Thread
from contextlib import closing

from PyQt4.Qt import (QToolButton, QDialog, QGridLayout, QIcon, QLabel,
QCheckBox, QDialogButtonBox)
from PyQt4.Qt import (QToolButton, QDialog, QGridLayout, QIcon, QLabel, QDialogButtonBox)

from calibre.gui2.actions import InterfaceAction
from calibre.gui2 import (error_dialog, Dispatcher, warning_dialog, gprefs,
@ -71,8 +70,10 @@ class Worker(Thread): # {{{
mi.timestamp = now()
self.progress(i, mi.title)
fmts = self.db.formats(x, index_is_id=True)
if not fmts: fmts = []
else: fmts = fmts.split(',')
if not fmts:
fmts = []
else:
fmts = fmts.split(',')
paths = []
for fmt in fmts:
p = self.db.format(x, fmt, index_is_id=True,
@ -146,12 +147,19 @@ class ChooseLibrary(QDialog): # {{{
b.setToolTip(_('Browse for library'))
b.clicked.connect(self.browse)
l.addWidget(b, 0, 2)
self.c = c = QCheckBox(_('&Delete after copy'))
l.addWidget(c, 1, 0, 1, 3)
self.bb = bb = QDialogButtonBox(QDialogButtonBox.Ok|QDialogButtonBox.Cancel)
self.bb = bb = QDialogButtonBox(QDialogButtonBox.Cancel)
bb.accepted.connect(self.accept)
bb.rejected.connect(self.reject)
l.addWidget(bb, 2, 0, 1, 3)
self.delete_after_copy = False
b = bb.addButton(_('&Copy'), bb.AcceptRole)
b.setIcon(QIcon(I('edit-copy.png')))
b.setToolTip(_('Copy to the specified library'))
b2 = bb.addButton(_('&Move'), bb.AcceptRole)
b2.clicked.connect(lambda: setattr(self, 'delete_after_copy', True))
b2.setIcon(QIcon(I('edit-cut.png')))
b2.setToolTip(_('Copy to the specified library and delete from the current library'))
b.setDefault(True)
l.addWidget(bb, 1, 0, 1, 3)
le.setMinimumWidth(350)
self.resize(self.sizeHint())

@ -163,7 +171,7 @@ class ChooseLibrary(QDialog): # {{{

@property
def args(self):
return (unicode(self.le.text()), self.c.isChecked())
return (unicode(self.le.text()), self.delete_after_copy)
# }}}

class CopyToLibraryAction(InterfaceAction):
@ -214,6 +222,8 @@ class CopyToLibraryAction(InterfaceAction):
d = ChooseLibrary(self.gui)
if d.exec_() == d.Accepted:
path, delete_after = d.args
if not path:
return
db = self.gui.library_view.model().db
current = os.path.normcase(os.path.abspath(db.library_path))
if current == os.path.normcase(os.path.abspath(path)):
@ -180,6 +180,13 @@ class DeleteAction(InterfaceAction):
self.gui.library_view.currentIndex())
self.gui.tags_view.recount()

def restore_format(self, book_id, original_fmt):
self.gui.current_db.restore_original_format(book_id, original_fmt)
self.gui.library_view.model().refresh_ids([book_id])
self.gui.library_view.model().current_changed(self.gui.library_view.currentIndex(),
self.gui.library_view.currentIndex())
self.gui.tags_view.recount()

def delete_selected_formats(self, *args):
ids = self._get_selected_ids()
if not ids:
@ -279,7 +279,7 @@ class EditMetadataAction(InterfaceAction):
'''
Edit metadata of selected books in library in bulk.
'''
rows = [r.row() for r in \
rows = [r.row() for r in
self.gui.library_view.selectionModel().selectedRows()]
m = self.gui.library_view.model()
ids = [m.id(r) for r in rows]
@ -470,38 +470,32 @@ class EditMetadataAction(InterfaceAction):
db.set_cover(dest_id, dest_cover)

for key in db.field_metadata: # loop thru all defined fields
if db.field_metadata[key]['is_custom']:
colnum = db.field_metadata[key]['colnum']
fm = db.field_metadata[key]
if not fm['is_custom']:
continue
dt = fm['datatype']
colnum = fm['colnum']
# Get orig_dest_comments before it gets changed
if db.field_metadata[key]['datatype'] == 'comments':
if dt == 'comments':
orig_dest_value = db.get_custom(dest_id, num=colnum, index_is_id=True)

for src_id in src_ids:
dest_value = db.get_custom(dest_id, num=colnum, index_is_id=True)
src_value = db.get_custom(src_id, num=colnum, index_is_id=True)
if db.field_metadata[key]['datatype'] == 'comments':
if src_value and src_value != orig_dest_value:
if (dt == 'comments' and src_value and src_value != orig_dest_value):
if not dest_value:
db.set_custom(dest_id, src_value, num=colnum)
else:
dest_value = unicode(dest_value) + u'\n\n' + unicode(src_value)
db.set_custom(dest_id, dest_value, num=colnum)
if db.field_metadata[key]['datatype'] in \
('bool', 'int', 'float', 'rating', 'datetime') \
and dest_value is None:
if (dt in {'bool', 'int', 'float', 'rating', 'datetime'} and dest_value is None):
db.set_custom(dest_id, src_value, num=colnum)
if db.field_metadata[key]['datatype'] == 'series' \
and not dest_value:
if src_value:
if (dt == 'series' and not dest_value and src_value):
src_index = db.get_custom_extra(src_id, num=colnum, index_is_id=True)
db.set_custom(dest_id, src_value, num=colnum, extra=src_index)
if (db.field_metadata[key]['datatype'] == 'enumeration' or
(db.field_metadata[key]['datatype'] == 'text' and
not db.field_metadata[key]['is_multiple'])
and not dest_value):
if (dt == 'enumeration' or (dt == 'text' and not fm['is_multiple']) and not dest_value):
db.set_custom(dest_id, src_value, num=colnum)
if db.field_metadata[key]['datatype'] == 'text' \
and db.field_metadata[key]['is_multiple']:
if src_value:
if (dt == 'text' and fm['is_multiple'] and src_value):
if not dest_value:
dest_value = src_value
else:
@ -585,7 +579,6 @@ class EditMetadataAction(InterfaceAction):
self.apply_pd.value += 1
QTimer.singleShot(50, self.do_one_apply)


def apply_mi(self, book_id, mi):
db = self.gui.current_db
@ -37,7 +37,13 @@ class Polish(QDialog): # {{{
self.setWindowTitle(title)

self.help_text = {
'polish': _('<h3>About Polishing books</h3>%s')%HELP['about'],
'polish': _('<h3>About Polishing books</h3>%s')%HELP['about'].format(
_('''<p>If you have both EPUB and ORIGINAL_EPUB in your book,
then polishing will run on ORIGINAL_EPUB (the same for other
ORIGINAL_* formats). So if you
want Polishing to not run on the ORIGINAL_* format, delete the
ORIGINAL_* format before running it.</p>''')
),

'subset':_('<h3>Subsetting fonts</h3>%s')%HELP['subset'],
@ -88,9 +88,7 @@ class StoreAction(InterfaceAction):
if row == None:
error_dialog(self.gui, _('Cannot search'), _('No book selected'), show=True)
return

query = 'author:"%s"' % self._get_author(row)
self.search(query)
self.search({ 'author': self._get_author(row) })

def _get_title(self, row):
title = ''

@ -107,18 +105,14 @@ class StoreAction(InterfaceAction):
if row == None:
error_dialog(self.gui, _('Cannot search'), _('No book selected'), show=True)
return

query = 'title:"%s"' % self._get_title(row)
self.search(query)
self.search({ 'title': self._get_title(row) })

def search_author_title(self):
row = self._get_selected_row()
if row == None:
error_dialog(self.gui, _('Cannot search'), _('No book selected'), show=True)
return

query = 'author:"%s" title:"%s"' % (self._get_author(row), self._get_title(row))
self.search(query)
self.search({ 'author': self._get_author(row), 'title': self._get_title(row) })

def choose(self):
from calibre.gui2.store.config.chooser.chooser_dialog import StoreChooserDialog
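
The three search entry points above switch from building a single keyword
string to passing a dict of per-field terms. A minimal sketch of the two call
forms, with placeholder author/title values that are not from the commit:

    self.search('author:"Austen" title:"Emma"')         # old: one query string
    self.search({'author': 'Austen', 'title': 'Emma'})  # new: per-field dict

The SearchDialog changes further down accept either form, so existing string
callers keep working.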

@ -405,6 +405,7 @@ class BookInfo(QWebView):
link_clicked = pyqtSignal(object)
remove_format = pyqtSignal(int, object)
save_format = pyqtSignal(int, object)
restore_format = pyqtSignal(int, object)

def __init__(self, vertical, parent=None):
QWebView.__init__(self, parent)

@ -418,7 +419,7 @@ class BookInfo(QWebView):
palette.setBrush(QPalette.Base, Qt.transparent)
self.page().setPalette(palette)
self.css = P('templates/book_details.css', data=True).decode('utf-8')
for x, icon in [('remove', 'trash.png'), ('save', 'save.png')]:
for x, icon in [('remove', 'trash.png'), ('save', 'save.png'), ('restore', 'edit-undo.png')]:
ac = QAction(QIcon(I(icon)), '', self)
ac.current_fmt = None
ac.triggered.connect(getattr(self, '%s_format_triggerred'%x))

@ -436,6 +437,9 @@ class BookInfo(QWebView):
def save_format_triggerred(self):
self.context_action_triggered('save')

def restore_format_triggerred(self):
self.context_action_triggered('restore')

def link_activated(self, link):
self._link_clicked = True
if unicode(link.scheme()) in ('http', 'https'):

@ -479,7 +483,11 @@ class BookInfo(QWebView):
traceback.print_exc()
else:
for a, t in [('remove', _('Delete the %s format')),
('save', _('Save the %s format to disk'))]:
('save', _('Save the %s format to disk')),
('restore', _('Restore the %s format')),
]:
if a == 'restore' and not fmt.upper().startswith('ORIGINAL_'):
continue
ac = getattr(self, '%s_format_action'%a)
ac.current_fmt = (book_id, fmt)
ac.setText(t%parts[2])

@ -585,6 +593,7 @@ class BookDetails(QWidget): # {{{
view_specific_format = pyqtSignal(int, object)
remove_specific_format = pyqtSignal(int, object)
save_specific_format = pyqtSignal(int, object)
restore_specific_format = pyqtSignal(int, object)
remote_file_dropped = pyqtSignal(object, object)
files_dropped = pyqtSignal(object, object)
cover_changed = pyqtSignal(object, object)

@ -654,6 +663,7 @@ class BookDetails(QWidget): # {{{
self.book_info.link_clicked.connect(self.handle_click)
self.book_info.remove_format.connect(self.remove_specific_format)
self.book_info.save_format.connect(self.save_specific_format)
self.book_info.restore_format.connect(self.restore_specific_format)
self.setCursor(Qt.PointingHandCursor)

def handle_click(self, link):

@ -272,6 +272,8 @@ class LayoutMixin(object): # {{{
self.iactions['Remove Books'].remove_format_by_id)
self.book_details.save_specific_format.connect(
self.iactions['Save To Disk'].save_library_format_by_ids)
self.book_details.restore_specific_format.connect(
self.iactions['Remove Books'].restore_format)
self.book_details.view_device_book.connect(
self.iactions['View'].view_device_book)
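
The restore plumbing above is ordinary Qt signal relaying: BookInfo.restore_format
feeds BookDetails.restore_specific_format, which LayoutMixin wires to the Remove
Books action's restore_format. A minimal, self-contained sketch of the same relay
pattern (a toy class with a hypothetical slot, not the calibre widgets themselves):

    from PyQt4.Qt import QObject, pyqtSignal

    class Relay(QObject):
        # same (book_id, fmt) signature as the signals above
        restore_format = pyqtSignal(int, object)

    def on_restore(book_id, fmt):
        pass  # hypothetical slot; calibre routes this to restore_format()

    relay = Relay()
    relay.restore_format.connect(on_restore)
    relay.restore_format.emit(42, 'ORIGINAL_EPUB')  # placeholder book id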

@ -18,7 +18,8 @@ from calibre.gui2.dialogs.message_box import ViewLog

Question = namedtuple('Question', 'payload callback cancel_callback '
'title msg html_log log_viewer_title log_is_file det_msg '
'show_copy_button checkbox_msg checkbox_checked')
'show_copy_button checkbox_msg checkbox_checked action_callback '
'action_label action_icon')

class ProceedQuestion(QDialog):

@ -51,6 +52,8 @@ class ProceedQuestion(QDialog):
self.copy_button = self.bb.addButton(_('&Copy to clipboard'),
self.bb.ActionRole)
self.copy_button.clicked.connect(self.copy_to_clipboard)
self.action_button = self.bb.addButton('', self.bb.ActionRole)
self.action_button.clicked.connect(self.action_clicked)
self.show_det_msg = _('Show &details')
self.hide_det_msg = _('Hide &details')
self.det_msg_toggle = self.bb.addButton(self.show_det_msg, self.bb.ActionRole)

@ -81,6 +84,12 @@ class ProceedQuestion(QDialog):
unicode(self.det_msg.toPlainText())))
self.copy_button.setText(_('Copied'))

def action_clicked(self):
if self.questions:
q = self.questions[0]
self.questions[0] = q._replace(callback=q.action_callback)
self.accept()

def accept(self):
if self.questions:
payload, callback, cancel_callback = self.questions[0][:3]

@ -123,13 +132,19 @@ class ProceedQuestion(QDialog):
self.resize(sz)

def show_question(self):
if self.isVisible(): return
if self.isVisible():
return
if self.questions:
question = self.questions[0]
self.msg_label.setText(question.msg)
self.setWindowTitle(question.title)
self.log_button.setVisible(bool(question.html_log))
self.copy_button.setVisible(bool(question.show_copy_button))
self.action_button.setVisible(question.action_callback is not None)
if question.action_callback is not None:
self.action_button.setText(question.action_label or '')
self.action_button.setIcon(
QIcon() if question.action_icon is None else question.action_icon)
self.det_msg.setPlainText(question.det_msg or '')
self.det_msg.setVisible(False)
self.det_msg_toggle.setVisible(bool(question.det_msg))

@ -145,7 +160,8 @@ class ProceedQuestion(QDialog):

def __call__(self, callback, payload, html_log, log_viewer_title, title,
msg, det_msg='', show_copy_button=False, cancel_callback=None,
log_is_file=False, checkbox_msg=None, checkbox_checked=False):
log_is_file=False, checkbox_msg=None, checkbox_checked=False,
action_callback=None, action_label=None, action_icon=None):
'''
A non modal popup that notifies the user that a background task has
been completed. This class guarantees that only a single popup is

@ -170,11 +186,19 @@ class ProceedQuestion(QDialog):
called with both the payload and the state of the
checkbox as arguments.
:param checkbox_checked: If True the checkbox is checked by default.
:param action_callback: If not None, an extra button is added, which
when clicked will cause action_callback to be called
instead of callback. action_callback is called in
exactly the same way as callback.
:param action_label: The text on the action button
:param action_icon: The icon for the action button, must be a QIcon object or None

'''
question = Question(payload, callback, cancel_callback, title, msg,
html_log, log_viewer_title, log_is_file, det_msg,
show_copy_button, checkbox_msg, checkbox_checked)
question = Question(
payload, callback, cancel_callback, title, msg, html_log,
log_viewer_title, log_is_file, det_msg, show_copy_button,
checkbox_msg, checkbox_checked, action_callback, action_label,
action_icon)
self.questions.append(question)
self.show_question()
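
With the new fields in place, a caller supplies the alternate action when
posting a question. A minimal usage sketch with hypothetical callbacks, where
proceed is an instance of ProceedQuestion and the positional arguments follow
the __call__ signature above:

    def done(payload):
        pass  # the normal proceed path

    def view_log(payload):
        pass  # runs instead of done() when the action button is clicked

    proceed(done, 'payload', None, _('Log'), _('Job finished'),
            _('The background job has completed'),
            action_callback=view_log, action_label=_('View log'))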

@ -62,16 +62,20 @@ class SearchDialog(QDialog, Ui_Dialog):
self.setup_store_checks()

# Set the search query
if isinstance(query, (str, unicode)):
self.search_edit.setText(query)
elif isinstance(query, dict):
if 'author' in query:
self.search_author.setText(query['author'])
if 'title' in query:
self.search_title.setText(query['title'])
# Title
self.search_title.setText(query)
self.search_title.setSizeAdjustPolicy(QComboBox.AdjustToMinimumContentsLengthWithIcon)
self.search_title.setMinimumContentsLength(25)
# Author
self.search_author.setText(query)
self.search_author.setSizeAdjustPolicy(QComboBox.AdjustToMinimumContentsLengthWithIcon)
self.search_author.setMinimumContentsLength(25)
# Keyword
self.search_edit.setText(query)
self.search_edit.setSizeAdjustPolicy(QComboBox.AdjustToMinimumContentsLengthWithIcon)
self.search_edit.setMinimumContentsLength(25)

@ -408,7 +412,7 @@ class SearchDialog(QDialog, Ui_Dialog):
self.save_state()

def exec_(self):
if unicode(self.search_edit.text()).strip():
if unicode(self.search_edit.text()).strip() or unicode(self.search_title.text()).strip() or unicode(self.search_author.text()).strip():
self.do_search()
return QDialog.exec_(self)

@ -1,91 +1,104 @@
# -*- coding: utf-8 -*-

from __future__ import (unicode_literals, division, absolute_import, print_function)
store_version = 2 # Needed for dynamic plugin loading
store_version = 3 # Needed for dynamic plugin loading

__license__ = 'GPL 3'
__copyright__ = '2011, John Schember <john@nachtimwald.com>'
__copyright__ = '2011, 2013, John Schember <john@nachtimwald.com>'
__docformat__ = 'restructuredtext en'

import base64
import mimetypes
import re
import urllib
from contextlib import closing

from lxml import html
from lxml import etree

from PyQt4.Qt import QUrl

from calibre import browser, random_user_agent, url_slash_cleaner
from calibre.gui2 import open_url
from calibre.gui2.store import StorePlugin
from calibre import browser, url_slash_cleaner
from calibre.constants import __version__
from calibre.gui2.store.basic_config import BasicStoreConfig
from calibre.gui2.store.opensearch_store import OpenSearchOPDSStore
from calibre.gui2.store.search_result import SearchResult
from calibre.gui2.store.web_store_dialog import WebStoreDialog

class GutenbergStore(BasicStoreConfig, StorePlugin):
class GutenbergStore(BasicStoreConfig, OpenSearchOPDSStore):

def open(self, parent=None, detail_item=None, external=False):
url = 'http://gutenberg.org/'

if detail_item:
detail_item = url_slash_cleaner(url + detail_item)

if external or self.config.get('open_external', False):
open_url(QUrl(detail_item if detail_item else url))
else:
d = WebStoreDialog(self.gui, url, parent, detail_item)
d.setWindowTitle(self.name)
d.set_tags(self.config.get('tags', ''))
d.exec_()
open_search_url = 'http://www.gutenberg.org/catalog/osd-books.xml'
web_url = 'http://m.gutenberg.org/'

def search(self, query, max_results=10, timeout=60):
url = 'http://m.gutenberg.org/ebooks/search.mobile/?default_prefix=all&sort_order=title&query=' + urllib.quote_plus(query)
'''
Gutenberg's OPDS feed is poorly implemented and has a number of issues
which require very special handling to fix the results.

br = browser(user_agent=random_user_agent())
Issues:
* "Sort Alphabetically" and "Sort by Release Date" are returned
as book entries.
* The author is put into a "content" tag and not the author tag.
* The link to the book itself goes to an OPDS page which we need
to turn into a link to a web page.
* acquisition links are not part of the search result so we have
to go to the OPDS item itself. Detail item pages have a nasty
note saying:
DON'T USE THIS PAGE FOR SCRAPING.
Seriously. You'll only get your IP blocked.
We're using the OPDS feed because people were getting blocked with
the previous implementation, so using OPDS probably
won't solve this issue.
* Images are not links but base64 encoded strings. They are also not
real cover images but a little blue book thumbnail.
'''

url = 'http://m.gutenberg.org/ebooks/search.opds/?query=' + urllib.quote_plus(query)

counter = max_results
br = browser(user_agent='calibre/'+__version__)
with closing(br.open(url, timeout=timeout)) as f:
doc = html.fromstring(f.read())
for data in doc.xpath('//ol[@class="results"]/li[@class="booklink"]'):
doc = etree.fromstring(f.read())
for data in doc.xpath('//*[local-name() = "entry"]'):
if counter <= 0:
break

id = ''.join(data.xpath('./a/@href'))
id = id.split('.mobile')[0]

title = ''.join(data.xpath('.//span[@class="title"]/text()'))
author = ''.join(data.xpath('.//span[@class="subtitle"]/text()'))

counter -= 1

s = SearchResult()
s.cover_url = ''

s.detail_item = id.strip()
s.title = title.strip()
s.author = author.strip()
s.price = '$0.00'
s.drm = SearchResult.DRM_UNLOCKED
# We could use the <link rel="alternate" type="text/html" ...> tag from the
# detail OPDS page but this is easier.
id = ''.join(data.xpath('./*[local-name() = "id"]/text()')).strip()
s.detail_item = url_slash_cleaner('%s/ebooks/%s' % (self.web_url, re.sub(r'[^\d]', '', id)))
if not s.detail_item:
continue

yield s

def get_details(self, search_result, timeout):
url = url_slash_cleaner('http://m.gutenberg.org/' + search_result.detail_item)

br = browser(user_agent=random_user_agent())
with closing(br.open(url, timeout=timeout)) as nf:
doc = html.fromstring(nf.read())

for save_item in doc.xpath('//li[contains(@class, "icon_save")]/a'):
type = save_item.get('type')
href = save_item.get('href')
s.title = ' '.join(data.xpath('./*[local-name() = "title"]//text()')).strip()
s.author = ', '.join(data.xpath('./*[local-name() = "content"]//text()')).strip()
if not s.title or not s.author:
continue

# Get the formats and direct download links.
with closing(br.open(id, timeout=timeout/4)) as nf:
ndoc = etree.fromstring(nf.read())
for link in ndoc.xpath('//*[local-name() = "link" and @rel = "http://opds-spec.org/acquisition"]'):
type = link.get('type')
href = link.get('href')
if type:
ext = mimetypes.guess_extension(type)
if ext:
ext = ext[1:].upper().strip()
search_result.downloads[ext] = href
s.downloads[ext] = href

search_result.formats = ', '.join(search_result.downloads.keys())
s.formats = ', '.join(s.downloads.keys())
if not s.formats:
continue

return True
for link in data.xpath('./*[local-name() = "link"]'):
rel = link.get('rel')
href = link.get('href')
type = link.get('type')

if rel and href and type:
if rel in ('http://opds-spec.org/thumbnail', 'http://opds-spec.org/image/thumbnail'):
if href.startswith('data:image/png;base64,'):
s.cover_data = base64.b64decode(href.replace('data:image/png;base64,', ''))

yield s
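
The local-name() construction used throughout the new search() is what lets
the XPath expressions match Atom elements without registering the feed's
namespace. A self-contained sketch of the same technique on a toy feed:

    from lxml import etree

    raw = b'<feed xmlns="http://www.w3.org/2005/Atom"><entry><title>Emma</title></entry></feed>'
    doc = etree.fromstring(raw)
    # local-name() compares only the tag name, ignoring the namespace
    for entry in doc.xpath('//*[local-name() = "entry"]'):
        print(''.join(entry.xpath('./*[local-name() = "title"]//text()')))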

src/calibre/gui2/store/stores/koobe_plugin.py (new file, 82 lines)

@ -0,0 +1,82 @@
# -*- coding: utf-8 -*-

from __future__ import (division, absolute_import, print_function)
store_version = 1 # Needed for dynamic plugin loading

__license__ = 'GPL 3'
__copyright__ = '2013, Tomasz Długosz <tomek3d@gmail.com>'
__docformat__ = 'restructuredtext en'

import urllib
from base64 import b64encode
from contextlib import closing

from lxml import html

from PyQt4.Qt import QUrl

from calibre import browser, url_slash_cleaner
from calibre.gui2 import open_url
from calibre.gui2.store import StorePlugin
from calibre.gui2.store.basic_config import BasicStoreConfig
from calibre.gui2.store.search_result import SearchResult
from calibre.gui2.store.web_store_dialog import WebStoreDialog

class KoobeStore(BasicStoreConfig, StorePlugin):

def open(self, parent=None, detail_item=None, external=False):
aff_root = 'https://www.a4b-tracking.com/pl/stat-click-text-link/15/58/'
url = 'http://www.koobe.pl/'

aff_url = aff_root + str(b64encode(url))

detail_url = None
if detail_item:
detail_url = aff_root + str(b64encode(detail_item))

if external or self.config.get('open_external', False):
open_url(QUrl(url_slash_cleaner(detail_url if detail_url else aff_url)))
else:
d = WebStoreDialog(self.gui, url, parent, detail_url if detail_url else aff_url)
d.setWindowTitle(self.name)
d.set_tags(self.config.get('tags', ''))
d.exec_()

def search(self, query, max_results=10, timeout=60):

br = browser()
page = 1

counter = max_results
while counter:
with closing(br.open('http://www.koobe.pl/s,p,' + str(page) + ',szukaj/fraza:' + urllib.quote(query), timeout=timeout)) as f:
doc = html.fromstring(f.read().decode('utf-8'))
for data in doc.xpath('//div[@class="seach_result"]/div[@class="result"]'):
if counter <= 0:
break

id = ''.join(data.xpath('.//div[@class="cover"]/a/@href'))
if not id:
continue

cover_url = ''.join(data.xpath('.//div[@class="cover"]/a/img/@src'))
price = ''.join(data.xpath('.//span[@class="current_price"]/text()'))
title = ''.join(data.xpath('.//h2[@class="title"]/a/text()'))
author = ''.join(data.xpath('.//h3[@class="book_author"]/a/text()'))
formats = ', '.join(data.xpath('.//div[@class="formats"]/div/div/@title'))

counter -= 1

s = SearchResult()
s.cover_url = 'http://koobe.pl/' + cover_url
s.title = title.strip()
s.author = author.strip()
s.price = price
s.detail_item = 'http://koobe.pl' + id[1:]
s.formats = formats.upper()
s.drm = SearchResult.DRM_UNLOCKED

yield s
if not doc.xpath('//div[@class="site_bottom"]//a[@class="right"]'):
break
page += 1
@ -1,7 +1,7 @@
# -*- coding: utf-8 -*-

from __future__ import (unicode_literals, division, absolute_import, print_function)
store_version = 2 # Needed for dynamic plugin loading
store_version = 3 # Needed for dynamic plugin loading

__license__ = 'GPL 3'
__copyright__ = '2011-2013, Tomasz Długosz <tomek3d@gmail.com>'

@ -67,7 +67,7 @@ class NextoStore(BasicStoreConfig, StorePlugin):

cover_url = ''.join(data.xpath('.//img[@class="cover"]/@src'))
cover_url = re.sub(r'%2F', '/', cover_url)
cover_url = re.sub(r'\widthMax=120&heightMax=200', 'widthMax=64&heightMax=64', cover_url)
cover_url = re.sub(r'widthMax=120&heightMax=200', 'widthMax=64&heightMax=64', cover_url)
title = ''.join(data.xpath('.//a[@class="title"]/text()'))
title = re.sub(r' - ebook$', '', title)
formats = ', '.join(data.xpath('.//ul[@class="formats_available"]/li//b/text()'))

@ -82,7 +82,7 @@ class NextoStore(BasicStoreConfig, StorePlugin):
counter -= 1

s = SearchResult()
s.cover_url = 'http://www.nexto.pl' + cover_url
s.cover_url = cover_url if cover_url[:4] == 'http' else 'http://www.nexto.pl' + cover_url
s.title = title.strip()
s.author = author.strip()
s.price = price

@ -1,7 +1,7 @@
# -*- coding: utf-8 -*-

from __future__ import (unicode_literals, division, absolute_import, print_function)
store_version = 2 # Needed for dynamic plugin loading
store_version = 3 # Needed for dynamic plugin loading

__license__ = 'GPL 3'
__copyright__ = '2011-2013, Tomasz Długosz <tomek3d@gmail.com>'

@ -41,7 +41,7 @@ class VirtualoStore(BasicStoreConfig, StorePlugin):
url = 'http://virtualo.pl/?q=' + urllib.quote(query) + '&f=format_id:4,6,3'

br = browser()
no_drm_pattern = re.compile("Znak wodny")
no_drm_pattern = re.compile(r'Znak wodny|Brak')

counter = max_results
with closing(br.open(url, timeout=timeout)) as f:

@ -58,8 +58,8 @@ class VirtualoStore(BasicStoreConfig, StorePlugin):
cover_url = ''.join(data.xpath('.//div[@class="list_middle_left"]//a//img/@src'))
title = ''.join(data.xpath('.//div[@class="list_title list_text_left"]/a/text()'))
author = ', '.join(data.xpath('.//div[@class="list_authors list_text_left"]/a/text()'))
formats = [form.split('_')[-1].replace('.png', '') for form in data.xpath('.//div[@style="width:55%;float:left;text-align:left;height:18px;"]//a/img/@src')]
nodrm = no_drm_pattern.search(''.join(data.xpath('.//div[@style="width:45%;float:right;text-align:right;height:18px;"]/div/div/text()')))
formats = [form.split('_')[-1].replace('.png', '') for form in data.xpath('.//div[@style="width:55%;float:left;text-align:left;height:18px;"]//a/span/img/@src')]
nodrm = no_drm_pattern.search(''.join(data.xpath('.//div[@style="width:45%;float:right;text-align:right;height:18px;"]//span[@class="prompt_preview"]/text()')))

counter -= 1

@ -70,6 +70,6 @@ class VirtualoStore(BasicStoreConfig, StorePlugin):
s.price = price + ' zł'
s.detail_item = 'http://virtualo.pl' + id.strip().split('http://')[0]
s.formats = ', '.join(formats).upper()
s.drm = SearchResult.DRM_UNLOCKED if nodrm else SearchResult.DRM_UNKNOWN
s.drm = SearchResult.DRM_UNLOCKED if nodrm else SearchResult.DRM_LOCKED

yield s

@ -1,14 +1,15 @@
# -*- coding: utf-8 -*-

from __future__ import (unicode_literals, division, absolute_import, print_function)
store_version = 1 # Needed for dynamic plugin loading
store_version = 2 # Needed for dynamic plugin loading

__license__ = 'GPL 3'
__copyright__ = '2011-2012, Tomasz Długosz <tomek3d@gmail.com>'
__copyright__ = '2011-2013, Tomasz Długosz <tomek3d@gmail.com>'
__docformat__ = 'restructuredtext en'

import re
import urllib
from base64 import b64encode
from contextlib import closing

from lxml import html

@ -25,17 +26,19 @@ from calibre.gui2.store.web_store_dialog import WebStoreDialog
class WoblinkStore(BasicStoreConfig, StorePlugin):

def open(self, parent=None, detail_item=None, external=False):

aff_root = 'https://www.a4b-tracking.com/pl/stat-click-text-link/16/58/'
url = 'http://woblink.com/publication'

aff_url = aff_root + str(b64encode(url))
detail_url = None

if detail_item:
detail_url = 'http://woblink.com' + detail_item
detail_url = aff_root + str(b64encode('http://woblink.com' + detail_item))

if external or self.config.get('open_external', False):
open_url(QUrl(url_slash_cleaner(detail_url if detail_url else url)))
open_url(QUrl(url_slash_cleaner(detail_url if detail_url else aff_url)))
else:
d = WebStoreDialog(self.gui, url, parent, detail_url)
d = WebStoreDialog(self.gui, url, parent, detail_url if detail_url else aff_url)
d.setWindowTitle(self.name)
d.set_tags(self.config.get('tags', ''))
d.exec_()
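
Both this plugin and the new Koobe plugin route every URL through the same
affiliate redirector: the destination URL is base64 encoded and appended to
the tracker root. A minimal sketch, with a hypothetical detail page:

    from base64 import b64encode

    aff_root = 'https://www.a4b-tracking.com/pl/stat-click-text-link/16/58/'
    target = 'http://woblink.com/publication/12345'  # placeholder detail page
    aff_url = aff_root + b64encode(target)  # Python 2: str in, str out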

@ -559,11 +559,11 @@ class TOCView(QWidget): # {{{
b.setToolTip(_('Remove all selected entries'))
b.clicked.connect(self.del_items)

self.left_button = b = QToolButton(self)
self.right_button = b = QToolButton(self)
b.setIcon(QIcon(I('forward.png')))
b.setIconSize(QSize(ICON_SIZE, ICON_SIZE))
l.addWidget(b, 4, 3)
b.setToolTip(_('Unindent the current entry [Ctrl+Left]'))
b.setToolTip(_('Indent the current entry [Ctrl+Right]'))
b.clicked.connect(self.tocw.move_right)

self.down_button = b = QToolButton(self)
@ -54,7 +54,7 @@ def get_parser(usage):
def get_db(dbpath, options):
global do_notify
if options.library_path is not None:
dbpath = options.library_path
dbpath = os.path.expanduser(options.library_path)
if dbpath is None:
raise ValueError('No saved library path, either run the GUI or use the'
' --with-library option')

@ -164,7 +164,8 @@ List the books available in the calibre database.
parser.add_option('--ascending', default=False, action='store_true',
help=_('Sort results in ascending order'))
parser.add_option('-s', '--search', default=None,
help=_('Filter the results by the search query. For the format of the search query, please see the search related documentation in the User Manual. Default is to do no filtering.'))
help=_('Filter the results by the search query. For the format of the search query,'
' please see the search related documentation in the User Manual. Default is to do no filtering.'))
parser.add_option('-w', '--line-width', default=-1, type=int,
help=_('The maximum width of a single line in the output. Defaults to detecting screen size.'))
parser.add_option('--separator', default=' ', help=_('The string used to separate fields. Default is a space.'))

@ -244,7 +245,8 @@ def do_add(db, paths, one_book_per_directory, recurse, add_duplicates, otitle,
mi.authors = [_('Unknown')]
for x in ('title', 'authors', 'isbn', 'tags', 'series'):
val = locals()['o'+x]
if val: setattr(mi, x, val)
if val:
setattr(mi, x, val)
if oseries:
mi.series_index = oseries_index
if ocover:

@ -425,18 +427,26 @@ def command_remove(args, dbpath):

return 0

def do_add_format(db, id, fmt, path):
db.add_format_with_hooks(id, fmt.upper(), path, index_is_id=True)
def do_add_format(db, id, fmt, path, opts):
done = db.add_format_with_hooks(id, fmt.upper(), path, index_is_id=True,
replace=opts.replace)
if not done and not opts.replace:
prints(_('A %s file already exists for book: %d, not replacing')%(fmt.upper(), id))
else:
send_message()

def add_format_option_parser():
return get_parser(_(
parser = get_parser(_(
'''\
%prog add_format [options] id ebook_file

Add the ebook in ebook_file to the available formats for the logical book identified \
by id. You can get id by using the list command. If the format already exists, it is replaced.
by id. You can get id by using the list command. If the format already exists, \
it is replaced, unless the do not replace option is specified.\
'''))
parser.add_option('--dont-replace', dest='replace', default=True, action='store_false',
help=_('Do not replace the format if it already exists'))
return parser
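
With the new switch an existing format is kept rather than overwritten. A
typical invocation, where the book id and file name are placeholders:

    calibredb add_format --dont-replace 123 book.epub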

def command_add_format(args, dbpath):

@ -451,7 +461,7 @@ def command_add_format(args, dbpath):
id, path, fmt = int(args[1]), args[2], os.path.splitext(args[2])[-1]
if not fmt:
print _('ebook file must have an extension')
do_add_format(get_db(dbpath, opts), id, fmt[1:], path)
do_add_format(get_db(dbpath, opts), id, fmt[1:], path, opts)
return 0

def do_remove_format(db, id, fmt):

@ -1214,7 +1224,8 @@ def command_restore_database(args, dbpath):
dbpath = dbpath.decode(preferred_encoding)

class Progress(object):
def __init__(self): self.total = 1
def __init__(self):
self.total = 1

def __call__(self, msg, step):
if msg is None:
@ -352,7 +352,6 @@ class LibraryDatabase2(LibraryDatabase, SchemaUpgrade, CustomColumns):
'''.format(_('News')))
self.conn.commit()

CustomColumns.__init__(self)
template = '''\
(SELECT {query} FROM books_{table}_link AS link INNER JOIN

@ -1476,12 +1475,12 @@ class LibraryDatabase2(LibraryDatabase, SchemaUpgrade, CustomColumns):
return ret

def add_format_with_hooks(self, index, format, fpath, index_is_id=False,
path=None, notify=True):
path=None, notify=True, replace=True):
npath = self.run_import_plugins(fpath, format)
format = os.path.splitext(npath)[-1].lower().replace('.', '').upper()
stream = lopen(npath, 'rb')
format = check_ebook_format(stream, format)
retval = self.add_format(index, format, stream,
retval = self.add_format(index, format, stream, replace=replace,
index_is_id=index_is_id, path=path, notify=notify)
run_plugins_on_postimport(self, id, format)
return retval

@ -1489,7 +1488,8 @@ class LibraryDatabase2(LibraryDatabase, SchemaUpgrade, CustomColumns):
def add_format(self, index, format, stream, index_is_id=False, path=None,
notify=True, replace=True, copy_function=None):
id = index if index_is_id else self.id(index)
if not format: format = ''
if not format:
format = ''
self.format_metadata_cache[id].pop(format.upper(), None)
name = self.format_filename_cache[id].get(format.upper(), None)
if path is None:

@ -1541,6 +1541,14 @@ class LibraryDatabase2(LibraryDatabase, SchemaUpgrade, CustomColumns):
opath = self.format_abspath(book_id, nfmt, index_is_id=True)
return fmt if opath is None else nfmt

def restore_original_format(self, book_id, original_fmt, notify=True):
opath = self.format_abspath(book_id, original_fmt, index_is_id=True)
if opath is not None:
fmt = original_fmt.partition('_')[2]
with lopen(opath, 'rb') as f:
self.add_format(book_id, fmt, f, index_is_id=True, notify=False)
self.remove_format(book_id, original_fmt, index_is_id=True, notify=notify)
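
This method is the backend for the new right-click restore in the book details
panel: the ORIGINAL_* file is re-added under its base format name and the
ORIGINAL_* entry is then removed. A minimal sketch against an open database;
the library path and book id are placeholders:

    import os
    from calibre.library.database2 import LibraryDatabase2

    db = LibraryDatabase2(os.path.expanduser('~/Calibre Library'))
    db.restore_original_format(book_id, 'ORIGINAL_EPUB')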

def delete_book(self, id, notify=True, commit=True, permanent=False,
do_clean=True):
'''

@ -1568,7 +1576,8 @@ class LibraryDatabase2(LibraryDatabase, SchemaUpgrade, CustomColumns):
def remove_format(self, index, format, index_is_id=False, notify=True,
commit=True, db_only=False):
id = index if index_is_id else self.id(index)
if not format: format = ''
if not format:
format = ''
self.format_metadata_cache[id].pop(format.upper(), None)
name = self.format_filename_cache[id].get(format.upper(), None)
if name:

@ -2327,7 +2336,6 @@ class LibraryDatabase2(LibraryDatabase, SchemaUpgrade, CustomColumns):
identifiers[icu_lower(key)] = val
self.set_identifiers(id, identifiers, notify=False, commit=False)

user_mi = mi.get_all_user_metadata(make_copy=False)
for key in user_mi.iterkeys():
if key in self.field_metadata and \

@ -2606,7 +2614,6 @@ class LibraryDatabase2(LibraryDatabase, SchemaUpgrade, CustomColumns):
if notify:
self.notify('metadata', [id])

def set_publisher(self, id, publisher, notify=True, commit=True,
allow_case_change=False):
self.conn.execute('DELETE FROM books_publishers_link WHERE book=?',(id,))

@ -2812,7 +2819,7 @@ class LibraryDatabase2(LibraryDatabase, SchemaUpgrade, CustomColumns):
if new_id is None or old_id == new_id:
new_id = old_id
# New name doesn't exist. Simply change the old name
self.conn.execute('UPDATE publishers SET name=? WHERE id=?', \
self.conn.execute('UPDATE publishers SET name=? WHERE id=?',
(new_name, old_id))
else:
# Change the link table to point at the new one

@ -2852,7 +2859,7 @@ class LibraryDatabase2(LibraryDatabase, SchemaUpgrade, CustomColumns):
self.conn.commit()

def set_sort_field_for_author(self, old_id, new_sort, commit=True, notify=False):
self.conn.execute('UPDATE authors SET sort=? WHERE id=?', \
self.conn.execute('UPDATE authors SET sort=? WHERE id=?',
(new_sort.strip(), old_id))
if commit:
self.conn.commit()

@ -2951,7 +2958,7 @@ class LibraryDatabase2(LibraryDatabase, SchemaUpgrade, CustomColumns):
@classmethod
def cleanup_tags(cls, tags):
tags = [x.strip().replace(',', ';') for x in tags if x.strip()]
tags = [x.decode(preferred_encoding, 'replace') \
tags = [x.decode(preferred_encoding, 'replace')
if isbytestring(x) else x for x in tags]
tags = [u' '.join(x.split()) for x in tags]
ans, seen = [], set([])

@ -3355,7 +3362,6 @@ class LibraryDatabase2(LibraryDatabase, SchemaUpgrade, CustomColumns):
self.data.refresh_ids(self, [db_id]) # Needed to update format list and size
return db_id

def add_news(self, path, arg):
from calibre.ebooks.metadata.meta import get_metadata

@ -3455,7 +3461,6 @@ class LibraryDatabase2(LibraryDatabase, SchemaUpgrade, CustomColumns):
traceback.print_exc()
return id

def add_books(self, paths, formats, metadata, add_duplicates=True,
return_ids=False):
'''

@ -3643,7 +3648,8 @@ class LibraryDatabase2(LibraryDatabase, SchemaUpgrade, CustomColumns):
FIELDS.add('%d_index'%x)
data = []
for record in self.data:
if record is None: continue
if record is None:
continue
db_id = record[self.FIELD_MAP['id']]
if ids is not None and db_id not in ids:
continue

@ -3763,7 +3769,7 @@ books_series_link feeds
continue

key = os.path.splitext(path)[0]
if not books.has_key(key):
if key not in books:
books[key] = []
books[key].append(path)

@ -24,7 +24,7 @@ def stop_threaded_server(server):
server.exit()
server.thread = None

def create_wsgi_app(path_to_library=None, prefix=''):
def create_wsgi_app(path_to_library=None, prefix='', virtual_library=None):
'WSGI entry point'
from calibre.library import db
cherrypy.config.update({'environment': 'embedded'})

@ -32,6 +32,7 @@ def create_wsgi_app(path_to_library=None, prefix=''):
parser = option_parser()
opts, args = parser.parse_args(['calibre-server'])
opts.url_prefix = prefix
opts.restriction = virtual_library
server = LibraryServer(db, opts, wsgi=True, show_tracebacks=True)
return cherrypy.Application(server, script_name=None, config=server.config)
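
This is the fix for ticket 1167951: a virtual library restriction can now be
passed when embedding the content server as a WSGI app. A minimal embedding
sketch, assuming this file is importable as calibre.library.server.main; the
library path and restriction name are placeholders:

    from calibre.library.server.main import create_wsgi_app

    application = create_wsgi_app('/srv/calibre-library', prefix='/calibre',
                                  virtual_library='Fiction')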

@ -97,7 +98,6 @@ def daemonize(stdin='/dev/null', stdout='/dev/null', stderr='/dev/null'):
os.dup2(se.fileno(), sys.stderr.fileno())

def main(args=sys.argv):
from calibre.library.database2 import LibraryDatabase2
parser = option_parser()
[Numerous additional file diffs in this commit were suppressed because they are too large, and some files are not shown because too many files changed in this diff.]