mirror of https://github.com/kovidgoyal/calibre.git
synced 2025-07-09 03:04:10 -04:00

Merge from trunk

This commit is contained in: commit 313cd5543b
@ -19,6 +19,57 @@
# new recipes:
#   - title:

- version: 0.9.13
  date: 2013-01-04

  new features:
    - title: "Complete rewrite of the PDF Output engine, to support links and fix various bugs"
      type: major
      description: "calibre now has a new PDF output engine that supports links in the text. It also fixes various bugs, detailed below. In order to implement support for links and fix these bugs, the engine had to be completely rewritten, so there may be some regressions."

    - title: "Show disabled device plugins in Preferences->Ignored Devices"

    - title: "Get Books: Fix Smashwords, Google Books and B&N stores. Add Nook UK store"

    - title: "Allow series numbers lower than -100 for custom series columns."
      tickets: [1094475]

    - title: "Add mass storage driver for rockchip based android smart phones"
      tickets: [1087809]

    - title: "Add a clear ratings button to the edit metadata dialog"

  bug fixes:
    - title: "PDF Output: Fix custom page sizes not working on OS X"

    - title: "PDF Output: Fix embedding of many fonts not supported (note that embedding of OpenType fonts with Postscript outlines is still not supported on Windows, though it is supported on other operating systems)"

    - title: "PDF Output: Fix crashes converting some books to PDF on OS X"
      tickets: [1087688]

    - title: "HTML Input: Handle entities inside href attributes when following the links in an HTML file."
      tickets: [1094203]

    - title: "Content server: Fix custom icons not used for sub categories"
      tickets: [1095016]

    - title: "Force use of non-unicode constants in compiled templates. Fixes a problem with regular expression character classes and probably other things."

    - title: "Kobo driver: Do not error out if there are invalid dates in the device database"
      tickets: [1094597]

    - title: "Content server: Fix for non-unicode hostnames when using mDNS"
      tickets: [1094063]

  improved recipes:
    - Today's Zaman
    - The Economist
    - Foreign Affairs
    - New York Times
    - Alternet
    - Harper's Magazine
    - La Stampa

- version: 0.9.12
  date: 2012-12-28
@ -672,6 +672,19 @@ There are three possible things I know of, that can cause this:

  * The Logitech SetPoint Settings application causes random crashes in
    |app| when it is open. Close it before starting |app|.

If none of the above apply to you, then there is some other program on your
computer that is interfering with |app|. First reboot your computer in safe
mode, to have as few running programs as possible, and see if the crashes still
happen. If they do not, then you know it is some program causing the problem.
The most likely such culprit is a program that modifies other programs'
behavior, such as an antivirus, a device driver, something like RoboForm (an
automatic form filling app) or an assistive technology like Voice Control or a
Screen Reader.

The only way to find the culprit is to eliminate the programs one by one and
see which one is causing the issue. Basically, stop a program, run calibre,
check for crashes. If they still happen, stop another program and repeat.
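The one-by-one procedure above is linear; with many candidate programs, halving the suspect set (a binary search) reaches the culprit in far fewer reboot cycles. A minimal sketch of the idea, assuming a hypothetical `crashes_with(subset)` check that you perform manually at each step (run calibre with only that subset of programs active and report whether it crashes):

```python
def find_culprit(programs, crashes_with):
    """Binary-search for the single program whose presence makes
    calibre crash. crashes_with(subset) reports whether calibre
    crashes when only `subset` of the programs is running."""
    candidates = list(programs)
    while len(candidates) > 1:
        half = candidates[:len(candidates) // 2]
        # If crashes persist with only this half running, the culprit
        # is inside it; otherwise it is in the other half.
        if crashes_with(half):
            candidates = half
        else:
            candidates = candidates[len(candidates) // 2:]
    if candidates and crashes_with(candidates):
        return candidates[0]
    return None

# Simulated environment where "roboform" is the culprit:
progs = ["antivirus", "roboform", "screenreader", "setpoint"]
print(find_culprit(progs, lambda s: "roboform" in s))  # -> roboform
```

Four programs take two rounds instead of up to four; sixteen take four instead of sixteen.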
|app| is not starting on OS X?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -70,18 +70,6 @@ class Economist(BasicNewsRecipe):
        return br
    '''

    def get_cover_url(self):
        soup = self.index_to_soup('http://www.economist.com/printedition/covers')
        div = soup.find('div', attrs={'class':lambda x: x and
            'print-cover-links' in x})
        a = div.find('a', href=True)
        url = a.get('href')
        if url.startswith('/'):
            url = 'http://www.economist.com' + url
        soup = self.index_to_soup(url)
        div = soup.find('div', attrs={'class':'cover-content'})
        img = div.find('img', src=True)
        return img.get('src')

    def parse_index(self):
        return self.economist_parse_index()
@ -92,7 +80,7 @@ class Economist(BasicNewsRecipe):
        if div is not None:
            img = div.find('img', src=True)
            if img is not None:
                self.cover_url = img['src']
                self.cover_url = re.sub('thumbnail','full',img['src'])
        feeds = OrderedDict()
        for section in soup.findAll(attrs={'class':lambda x: x and 'section' in
            x}):
@ -9,7 +9,7 @@ from calibre.web.feeds.news import BasicNewsRecipe
from calibre.ebooks.BeautifulSoup import Tag, NavigableString
from collections import OrderedDict

import time, re
import re

class Economist(BasicNewsRecipe):
@ -37,7 +37,6 @@ class Economist(BasicNewsRecipe):
        padding: 7px 0px 9px;
    }
    '''

    oldest_article = 7.0
    remove_tags = [
        dict(name=['script', 'noscript', 'title', 'iframe', 'cf_floatingcontent']),
@ -46,7 +45,6 @@ class Economist(BasicNewsRecipe):
        {'class': lambda x: x and 'share-links-header' in x},
    ]
    keep_only_tags = [dict(id='ec-article-body')]
    needs_subscription = False
    no_stylesheets = True
    preprocess_regexps = [(re.compile('</html>.*', re.DOTALL),
        lambda x:'</html>')]
@ -55,28 +53,26 @@ class Economist(BasicNewsRecipe):
    # downloaded with connection reset by peer (104) errors.
    delay = 1

    def get_cover_url(self):
        soup = self.index_to_soup('http://www.economist.com/printedition/covers')
        div = soup.find('div', attrs={'class':lambda x: x and
            'print-cover-links' in x})
        a = div.find('a', href=True)
        url = a.get('href')
        if url.startswith('/'):
            url = 'http://www.economist.com' + url
        soup = self.index_to_soup(url)
        div = soup.find('div', attrs={'class':'cover-content'})
        img = div.find('img', src=True)
        return img.get('src')
    needs_subscription = False
    '''
    def get_browser(self):
        br = BasicNewsRecipe.get_browser()
        if self.username and self.password:
            br.open('http://www.economist.com/user/login')
            br.select_form(nr=1)
            br['name'] = self.username
            br['pass'] = self.password
            res = br.submit()
            raw = res.read()
            if '>Log out<' not in raw:
                raise ValueError('Failed to login to economist.com. '
                        'Check your username and password.')
        return br
    '''

    def parse_index(self):
        try:
            return self.economist_parse_index()
        except:
            raise
            self.log.warn(
                'Initial attempt to parse index failed, retrying in 30 seconds')
            time.sleep(30)
            return self.economist_parse_index()
        return self.economist_parse_index()

    def economist_parse_index(self):
        soup = self.index_to_soup(self.INDEX)
@ -84,7 +80,7 @@ class Economist(BasicNewsRecipe):
        if div is not None:
            img = div.find('img', src=True)
            if img is not None:
                self.cover_url = img['src']
                self.cover_url = re.sub('thumbnail','full',img['src'])
        feeds = OrderedDict()
        for section in soup.findAll(attrs={'class':lambda x: x and 'section' in
            x}):
@ -151,154 +147,3 @@ class Economist(BasicNewsRecipe):
            div.insert(2, img)
            table.replaceWith(div)
        return soup

'''
from calibre.web.feeds.news import BasicNewsRecipe
from calibre.utils.threadpool import ThreadPool, makeRequests
from calibre.ebooks.BeautifulSoup import Tag, NavigableString
import time, string, re
from datetime import datetime
from lxml import html

class Economist(BasicNewsRecipe):

    title = 'The Economist (RSS)'
    language = 'en'

    __author__ = "Kovid Goyal"
    description = ('Global news and current affairs from a European'
            ' perspective. Best downloaded on Friday mornings (GMT).'
            ' Much slower than the print edition based version.')
    extra_css = '.headline {font-size: x-large;} \n h2 { font-size: small; } \n h1 { font-size: medium; }'
    oldest_article = 7.0
    cover_url = 'http://media.economist.com/sites/default/files/imagecache/print-cover-thumbnail/print-covers/currentcoverus_large.jpg'
    #cover_url = 'http://www.economist.com/images/covers/currentcoverus_large.jpg'
    remove_tags = [
        dict(name=['script', 'noscript', 'title', 'iframe', 'cf_floatingcontent']),
        dict(attrs={'class':['dblClkTrk', 'ec-article-info',
            'share_inline_header', 'related-items']}),
        {'class': lambda x: x and 'share-links-header' in x},
    ]
    keep_only_tags = [dict(id='ec-article-body')]
    no_stylesheets = True
    preprocess_regexps = [(re.compile('</html>.*', re.DOTALL),
        lambda x:'</html>')]

    def parse_index(self):
        from calibre.web.feeds.feedparser import parse
        if self.test:
            self.oldest_article = 14.0
        raw = self.index_to_soup(
                'http://feeds.feedburner.com/economist/full_print_edition',
                raw=True)
        entries = parse(raw).entries
        pool = ThreadPool(10)
        self.feed_dict = {}
        requests = []
        for i, item in enumerate(entries):
            title = item.get('title', _('Untitled article'))
            published = item.date_parsed
            if not published:
                published = time.gmtime()
            utctime = datetime(*published[:6])
            delta = datetime.utcnow() - utctime
            if delta.days*24*3600 + delta.seconds > 24*3600*self.oldest_article:
                self.log.debug('Skipping article %s as it is too old.'%title)
                continue
            link = item.get('link', None)
            description = item.get('description', '')
            author = item.get('author', '')

            requests.append([i, link, title, description, author, published])
        if self.test:
            requests = requests[:4]
        requests = makeRequests(self.process_eco_feed_article, requests, self.eco_article_found,
                self.eco_article_failed)
        for r in requests: pool.putRequest(r)
        pool.wait()

        return self.eco_sort_sections([(t, a) for t, a in
            self.feed_dict.items()])

    def eco_sort_sections(self, feeds):
        if not feeds:
            raise ValueError('No new articles found')
        order = {
            'The World This Week': 1,
            'Leaders': 2,
            'Letters': 3,
            'Briefing': 4,
            'Business': 5,
            'Finance And Economics': 6,
            'Science & Technology': 7,
            'Books & Arts': 8,
            'International': 9,
            'United States': 10,
            'Asia': 11,
            'Europe': 12,
            'The Americas': 13,
            'Middle East & Africa': 14,
            'Britain': 15,
            'Obituary': 16,
        }
        return sorted(feeds, cmp=lambda x,y:cmp(order.get(x[0], 100),
            order.get(y[0], 100)))

    def process_eco_feed_article(self, args):
        from calibre import browser
        i, url, title, description, author, published = args
        br = browser()
        ret = br.open(url)
        raw = ret.read()
        url = br.geturl().split('?')[0]+'/print'
        root = html.fromstring(raw)
        matches = root.xpath('//*[@class = "ec-article-info"]')
        feedtitle = 'Miscellaneous'
        if matches:
            feedtitle = string.capwords(html.tostring(matches[-1], method='text',
                encoding=unicode).split('|')[-1].strip())
        return (i, feedtitle, url, title, description, author, published)

    def eco_article_found(self, req, result):
        from calibre.web.feeds import Article
        i, feedtitle, link, title, description, author, published = result
        self.log('Found print version for article:', title, 'in', feedtitle,
                'at', link)

        a = Article(i, title, link, author, description, published, '')

        article = dict(title=a.title, description=a.text_summary,
            date=time.strftime(self.timefmt, a.date), author=a.author, url=a.url)
        if feedtitle not in self.feed_dict:
            self.feed_dict[feedtitle] = []
        self.feed_dict[feedtitle].append(article)

    def eco_article_failed(self, req, tb):
        self.log.error('Failed to download %s with error:'%req.args[0][2])
        self.log.debug(tb)

    def eco_find_image_tables(self, soup):
        for x in soup.findAll('table', align=['right', 'center']):
            if len(x.findAll('font')) in (1,2) and len(x.findAll('img')) == 1:
                yield x

    def postprocess_html(self, soup, first):
        body = soup.find('body')
        for name, val in body.attrs:
            del body[name]
        for table in list(self.eco_find_image_tables(soup)):
            caption = table.find('font')
            img = table.find('img')
            div = Tag(soup, 'div')
            div['style'] = 'text-align:left;font-size:70%'
            ns = NavigableString(self.tag_to_string(caption))
            div.insert(0, ns)
            div.insert(1, Tag(soup, 'br'))
            img.extract()
            del img['width']
            del img['height']
            div.insert(2, img)
            table.replaceWith(div)
        return soup
'''
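The `cmp`-based sort in `eco_sort_sections` above is Python 2 only; `sorted` lost its `cmp` parameter in Python 3, where the same ordering is expressed with a `key` function. A minimal, self-contained sketch of the equivalent logic (the `order` table is abbreviated here):

```python
def sort_sections(feeds, order):
    """Sort (section, articles) pairs by an explicit rank table.
    Sections absent from the table sort after all known ones (rank 100)."""
    return sorted(feeds, key=lambda f: order.get(f[0], 100))

order = {'The World This Week': 1, 'Leaders': 2, 'Letters': 3}
feeds = [('Letters', []), ('Obituary', []), ('The World This Week', [])]
print([t for t, _ in sort_sections(feeds, order)])
# -> ['The World This Week', 'Letters', 'Obituary']
```

`dict.get` with a default rank keeps the sort total without enumerating every possible section name.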
118 recipes/el_diplo.recipe Normal file
@ -0,0 +1,118 @@
# Copyright 2013 Tomás Di Domenico
#
# This is a news fetching recipe for the Calibre ebook software, for
# fetching the Cono Sur edition of Le Monde Diplomatique (www.eldiplo.org).
#
# This recipe is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This software is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this recipe.  If not, see <http://www.gnu.org/licenses/>.

import re
from contextlib import closing
from calibre.web.feeds.recipes import BasicNewsRecipe
from calibre.ptempfile import PersistentTemporaryFile
from calibre.utils.magick import Image

class ElDiplo_Recipe(BasicNewsRecipe):
    title = u'El Diplo'
    __author__ = 'Tomas Di Domenico'
    description = 'Publicacion mensual de Le Monde Diplomatique, edicion Argentina'
    language = 'es_AR'
    needs_subscription = True
    auto_cleanup = True

    def get_cover(self, url):
        tmp_cover = PersistentTemporaryFile(suffix=".jpg", prefix="eldiplo_")
        self.cover_url = tmp_cover.name

        with closing(self.browser.open(url)) as r:
            imgdata = r.read()

        img = Image()
        img.load(imgdata)
        img.crop(img.size[0], img.size[1]/2, 0, 0)

        img.save(tmp_cover.name)

    def get_browser(self):
        br = BasicNewsRecipe.get_browser()
        if self.username is not None and self.password is not None:
            br.open('http://www.eldiplo.org/index.php/login/-/do_login/index.html')
            br.select_form(nr=3)
            br['uName'] = self.username
            br['uPassword'] = self.password
            br.submit()
        self.browser = br
        return br

    def parse_index(self):
        default_sect = 'General'
        articles = {default_sect: []}
        ans = [default_sect]
        sectionsmarker = 'DOSSIER_TITLE: '
        sectionsre = re.compile('^' + sectionsmarker)

        soup = self.index_to_soup('http://www.eldiplo.org/index.php')

        coverdivs = soup.findAll(True, attrs={'id': ['lmd-foto']})
        a = coverdivs[0].find('a', href=True)
        coverurl = a['href'].split("?imagen=")[1]
        self.get_cover(coverurl)

        thedivs = soup.findAll(True, attrs={'class': ['lmd-leermas']})
        for div in thedivs:
            a = div.find('a', href=True)
            if 'Sumario completo' in self.tag_to_string(a, use_alt=True):
                summaryurl = re.sub(r'\?.*', '', a['href'])
                summaryurl = 'http://www.eldiplo.org' + summaryurl

        for pagenum in xrange(1, 10):
            soup = self.index_to_soup('{0}/?cms1_paging_p_b32={1}'.format(summaryurl, pagenum))
            thedivs = soup.findAll(True, attrs={'class': ['interna']})

            if len(thedivs) == 0:
                break

            for div in thedivs:
                section = div.find(True, text=sectionsre).replace(sectionsmarker, '')
                if section == '':
                    section = default_sect

                if section not in articles.keys():
                    articles[section] = []
                    ans.append(section)

                nota = div.find(True, attrs={'class': ['lmd-pl-titulo-nota-dossier']})
                a = nota.find('a', href=True)
                if not a:
                    continue

                url = re.sub(r'\?.*', '', a['href'])
                url = 'http://www.eldiplo.org' + url
                title = self.tag_to_string(a, use_alt=True).strip()

                # Default these so articles without a summary or author do not
                # raise NameError or reuse the previous article's values.
                description = ''
                auth = ''
                summary = div.find(True, attrs={'class': 'lmd-sumario-descript'}).find('p')
                if summary:
                    description = self.tag_to_string(summary, use_alt=False)

                aut = div.find(True, attrs={'class': 'lmd-autor-sumario'})
                if aut:
                    auth = self.tag_to_string(aut, use_alt=False).strip()

                if not articles.has_key(section):
                    articles[section] = []

                articles[section].append(dict(title=title, author=auth, url=url, date=None, description=description, content=''))

        #ans = self.sort_index_by(ans, {'The Front Page':-1, 'Dining In, Dining Out':1, 'Obituaries':2})
        ans = [(section, articles[section]) for section in ans if articles.has_key(section)]
        return ans
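The `parse_index` method above buckets scraped articles into sections with a dict of lists plus a separate `ans` list that preserves first-seen section order, which is the shape calibre expects a recipe's `parse_index` to return. The pattern in isolation, as a self-contained Python 3 sketch (section and article names here are made up for illustration):

```python
def bucket_by_section(items, default='General'):
    """Group (section, article) pairs into parse_index's return shape:
    a list of (section, [articles]) pairs preserving first-seen order.
    Empty section names fall back to `default`."""
    articles = {default: []}
    order = [default]
    for section, article in items:
        section = section or default
        if section not in articles:
            articles[section] = []
            order.append(section)
        articles[section].append(article)
    # Drop sections that ended up with no articles (e.g. the default).
    return [(s, articles[s]) for s in order if articles[s]]

items = [('', 'a1'), ('Dossier', 'a2'), ('Dossier', 'a3')]
print(bucket_by_section(items))
# -> [('General', ['a1']), ('Dossier', ['a2', 'a3'])]
```

Using one dict for membership and one list for ordering sidesteps any reliance on dict iteration order, which is why the original recipe carries both.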
@ -18,7 +18,7 @@ class Fleshbot(BasicNewsRecipe):
    encoding = 'utf-8'
    use_embedded_content = True
    language = 'en'
    masthead_url = 'http://cache.gawkerassets.com/assets/kotaku.com/img/logo.png'
    masthead_url = 'http://fbassets.s3.amazonaws.com/images/uploads/2012/01/fleshbot-logo.png'
    extra_css = '''
        body{font-family: "Lucida Grande",Helvetica,Arial,sans-serif}
        img{margin-bottom: 1em}
@ -31,7 +31,7 @@ class Fleshbot(BasicNewsRecipe):
    , 'language' : language
    }

    feeds = [(u'Articles', u'http://feeds.gawker.com/fleshbot/vip?format=xml')]
    feeds = [(u'Articles', u'http://www.fleshbot.com/feed')]

    remove_tags = [
        {'class': 'feedflare'},
@ -28,12 +28,15 @@ class IlMessaggero(BasicNewsRecipe):
    recursion = 10

    remove_javascript = True
    extra_css = ' .bianco31lucida{color: black} '

    keep_only_tags = [dict(name='h1', attrs={'class':'titoloLettura2'}),
        dict(name='h2', attrs={'class':'sottotitLettura'}),
        dict(name='span', attrs={'class':'testoArticoloG'})
    keep_only_tags = [dict(name='h1', attrs={'class':['titoloLettura2','titoloart','bianco31lucida']}),
        dict(name='h2', attrs={'class':['sottotitLettura','grigio16']}),
        dict(name='span', attrs={'class':'testoArticoloG'}),
        dict(name='div', attrs={'id':'testodim'})
    ]

    def get_cover_url(self):
        cover = None
        st = time.localtime()
@ -55,17 +58,16 @@ class IlMessaggero(BasicNewsRecipe):
    feeds = [
        (u'HomePage', u'http://www.ilmessaggero.it/rss/home.xml'),
        (u'Primo Piano', u'http://www.ilmessaggero.it/rss/initalia_primopiano.xml'),
        (u'Cronaca Bianca', u'http://www.ilmessaggero.it/rss/initalia_cronacabianca.xml'),
        (u'Cronaca Nera', u'http://www.ilmessaggero.it/rss/initalia_cronacanera.xml'),
        (u'Economia e Finanza', u'http://www.ilmessaggero.it/rss/economia.xml'),
        (u'Politica', u'http://www.ilmessaggero.it/rss/initalia_politica.xml'),
        (u'Scienza e Tecnologia', u'http://www.ilmessaggero.it/rss/scienza.xml'),
        (u'Cinema', u'http://www.ilmessaggero.it/rss.php?refresh_ce#'),
        (u'Viaggi', u'http://www.ilmessaggero.it/rss.php?refresh_ce#'),
        (u'Cultura', u'http://www.ilmessaggero.it/rss/cultura.xml'),
        (u'Tecnologia', u'http://www.ilmessaggero.it/rss/tecnologia.xml'),
        (u'Spettacoli', u'http://www.ilmessaggero.it/rss/spettacoli.xml'),
        (u'Edizioni Locali', u'http://www.ilmessaggero.it/rss/edlocali.xml'),
        (u'Roma', u'http://www.ilmessaggero.it/rss/roma.xml'),
        (u'Cultura e Tendenze', u'http://www.ilmessaggero.it/rss/roma_culturaspet.xml'),
        (u'Benessere', u'http://www.ilmessaggero.it/rss/benessere.xml'),
        (u'Sport', u'http://www.ilmessaggero.it/rss/sport.xml'),
        (u'Calcio', u'http://www.ilmessaggero.it/rss/sport_calcio.xml'),
        (u'Motori', u'http://www.ilmessaggero.it/rss/sport_motori.xml'),
        (u'Moda', u'http://www.ilmessaggero.it/rss/moda.xml')
    ]
@ -14,7 +14,8 @@ class LiberoNews(BasicNewsRecipe):
    __author__ = 'Marini Gabriele'
    description = 'Italian daily newspaper'

    cover_url = 'http://www.libero-news.it/images/logo.png'
    #cover_url = 'http://www.liberoquotidiano.it/images/Libero%20Quotidiano.jpg'
    cover_url = 'http://www.edicola.liberoquotidiano.it/vnlibero/fpcut.jsp?testata=milano'
    title = u'Libero '
    publisher = 'EDITORIALE LIBERO s.r.l 2006'
    category = 'News, politics, culture, economy, general interest'
@ -32,10 +33,11 @@ class LiberoNews(BasicNewsRecipe):
    remove_javascript = True

    keep_only_tags = [
        dict(name='div', attrs={'class':'Articolo'})
        dict(name='div', attrs={'class':'Articolo'}),
        dict(name='article')
    ]
    remove_tags = [
        dict(name='div', attrs={'class':['CommentaFoto','Priva2']}),
        dict(name='div', attrs={'class':['CommentaFoto','Priva2','login_commenti','box_16']}),
        dict(name='div', attrs={'id':['commentigenerale']})
    ]
    feeds = [
@ -66,21 +66,22 @@ class NewYorkReviewOfBooks(BasicNewsRecipe):
        self.log('Issue date:', date)

        # Find TOC
        toc = soup.find('ul', attrs={'class':'issue-article-list'})
        tocs = soup.findAll('ul', attrs={'class':'issue-article-list'})
        articles = []
        for li in toc.findAll('li'):
            h3 = li.find('h3')
            title = self.tag_to_string(h3)
            author = self.tag_to_string(li.find('h4'))
            title = title + u' (%s)'%author
            url = 'http://www.nybooks.com'+h3.find('a', href=True)['href']
            desc = ''
            for p in li.findAll('p'):
                desc += self.tag_to_string(p)
            self.log('Found article:', title)
            self.log('\t', url)
            self.log('\t', desc)
            articles.append({'title':title, 'url':url, 'date':'',
        for toc in tocs:
            for li in toc.findAll('li'):
                h3 = li.find('h3')
                title = self.tag_to_string(h3)
                author = self.tag_to_string(li.find('h4'))
                title = title + u' (%s)'%author
                url = 'http://www.nybooks.com'+h3.find('a', href=True)['href']
                desc = ''
                for p in li.findAll('p'):
                    desc += self.tag_to_string(p)
                self.log('Found article:', title)
                self.log('\t', url)
                self.log('\t', desc)
                articles.append({'title':title, 'url':url, 'date':'',
                    'description':desc})

        return [('Current Issue', articles)]
22 recipes/oxford_mail.recipe Normal file
@ -0,0 +1,22 @@
from calibre.web.feeds.news import BasicNewsRecipe

class HindustanTimes(BasicNewsRecipe):
    title = u'Oxford Mail'
    language = 'en_GB'
    __author__ = 'Krittika Goyal'
    oldest_article = 1  # days
    max_articles_per_feed = 25
    #encoding = 'cp1252'
    use_embedded_content = False

    no_stylesheets = True
    auto_cleanup = True

    feeds = [
        ('News',
            'http://www.oxfordmail.co.uk/news/rss/'),
        ('Sports',
            'http://www.oxfordmail.co.uk/sport/rss/'),
    ]
@ -26,28 +26,33 @@ class TodaysZaman_en(BasicNewsRecipe):
    # remove_attributes = ['width','height']

    feeds = [
        ( u'Home', u'http://www.todayszaman.com/rss?sectionId=0'),
        ( u'News', u'http://www.todayszaman.com/rss?sectionId=100'),
        ( u'Business', u'http://www.todayszaman.com/rss?sectionId=105'),
        ( u'Interviews', u'http://www.todayszaman.com/rss?sectionId=8'),
        ( u'Columnists', u'http://www.todayszaman.com/rss?sectionId=6'),
        ( u'Op-Ed', u'http://www.todayszaman.com/rss?sectionId=109'),
        ( u'Arts & Culture', u'http://www.todayszaman.com/rss?sectionId=110'),
        ( u'Expat Zone', u'http://www.todayszaman.com/rss?sectionId=132'),
        ( u'Sports', u'http://www.todayszaman.com/rss?sectionId=5'),
        ( u'Features', u'http://www.todayszaman.com/rss?sectionId=116'),
        ( u'Travel', u'http://www.todayszaman.com/rss?sectionId=117'),
        ( u'Leisure', u'http://www.todayszaman.com/rss?sectionId=118'),
        ( u'Weird But True', u'http://www.todayszaman.com/rss?sectionId=134'),
        ( u'Life', u'http://www.todayszaman.com/rss?sectionId=133'),
        ( u'Health', u'http://www.todayszaman.com/rss?sectionId=126'),
        ( u'Press Review', u'http://www.todayszaman.com/rss?sectionId=130'),
        ( u'Todays think tanks', u'http://www.todayszaman.com/rss?sectionId=159'),
    ]
        ( u'Home', u'http://www.todayszaman.com/0.rss'),
        ( u'Sports', u'http://www.todayszaman.com/5.rss'),
        ( u'Columnists', u'http://www.todayszaman.com/6.rss'),
        ( u'Interviews', u'http://www.todayszaman.com/9.rss'),
        ( u'News', u'http://www.todayszaman.com/100.rss'),
        ( u'National', u'http://www.todayszaman.com/101.rss'),
        ( u'Diplomacy', u'http://www.todayszaman.com/102.rss'),
        ( u'World', u'http://www.todayszaman.com/104.rss'),
        ( u'Business', u'http://www.todayszaman.com/105.rss'),
        ( u'Op-Ed', u'http://www.todayszaman.com/109.rss'),
        ( u'Arts & Culture', u'http://www.todayszaman.com/110.rss'),
        ( u'Features', u'http://www.todayszaman.com/116.rss'),
        ( u'Travel', u'http://www.todayszaman.com/117.rss'),
        ( u'Food', u'http://www.todayszaman.com/124.rss'),
        ( u'Press Review', u'http://www.todayszaman.com/130.rss'),
        ( u'Expat Zone', u'http://www.todayszaman.com/132.rss'),
        ( u'Life', u'http://www.todayszaman.com/133.rss'),
        ( u'Think Tanks', u'http://www.todayszaman.com/159.rss'),
        ( u'Almanac', u'http://www.todayszaman.com/161.rss'),
        ( u'Health', u'http://www.todayszaman.com/162.rss'),
        ( u'Fashion & Beauty', u'http://www.todayszaman.com/163.rss'),
        ( u'Science & Technology', u'http://www.todayszaman.com/349.rss'),
    ]

    #def preprocess_html(self, soup):
    #    return self.adeify_images(soup)
    #def print_version(self, url): #there is a problem caused by table format
        #return url.replace('http://www.todayszaman.com/newsDetail_getNewsById.action?load=detay&', 'http://www.todayszaman.com/newsDetail_openPrintPage.action?')
@ -12,13 +12,13 @@ msgstr ""
"Report-Msgid-Bugs-To: Debian iso-codes team <pkg-isocodes-"
"devel@lists.alioth.debian.org>\n"
"POT-Creation-Date: 2011-11-25 14:01+0000\n"
"PO-Revision-Date: 2012-12-22 17:18+0000\n"
"PO-Revision-Date: 2012-12-31 12:50+0000\n"
"Last-Translator: Ferran Rius <frius64@hotmail.com>\n"
"Language-Team: Catalan <linux@softcatala.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"X-Launchpad-Export-Date: 2012-12-23 04:38+0000\n"
"X-Launchpad-Export-Date: 2013-01-01 04:45+0000\n"
"X-Generator: Launchpad (build 16378)\n"
"Language: ca\n"

@ -1744,7 +1744,7 @@ msgstr "Asu (Nigèria)"

#. name for aun
msgid "One; Molmo"
msgstr "One; Molmo"
msgstr "Oneià; Molmo"

#. name for auo
msgid "Auyokawa"
@ -1964,7 +1964,7 @@ msgstr "Leyigha"

#. name for ayk
msgid "Akuku"
msgstr "Akuku"
msgstr "Okpe-Idesa-Akuku; Akuku"

#. name for ayl
msgid "Arabic; Libyan"
@ -9984,7 +9984,7 @@ msgstr "Indri"

#. name for ids
msgid "Idesa"
msgstr "Idesa"
msgstr "Okpe-Idesa-Akuku; Idesa"

#. name for idt
msgid "Idaté"
@ -19524,7 +19524,7 @@ msgstr ""
|
||||
|
||||
#. name for obi
|
||||
msgid "Obispeño"
|
||||
msgstr ""
|
||||
msgstr "Obispeño"
|
||||
|
||||
#. name for obk
|
||||
msgid "Bontok; Southern"
|
||||
@ -19532,7 +19532,7 @@ msgstr "Bontoc; meridional"
|
||||
|
||||
#. name for obl
|
||||
msgid "Oblo"
|
||||
msgstr ""
|
||||
msgstr "Oblo"
|
||||
|
||||
#. name for obm
|
||||
msgid "Moabite"
|
||||
@ -19552,11 +19552,11 @@ msgstr "Bretó; antic"
|
||||
|
||||
#. name for obu
|
||||
msgid "Obulom"
|
||||
msgstr ""
|
||||
msgstr "Obulom"
|
||||
|
||||
#. name for oca
|
||||
msgid "Ocaina"
|
||||
msgstr ""
|
||||
msgstr "Ocaina"
|
||||
|
||||
#. name for och
|
||||
msgid "Chinese; Old"
|
||||
@ -19576,11 +19576,11 @@ msgstr "Matlazinca; Atzingo"
|
||||
|
||||
#. name for oda
|
||||
msgid "Odut"
|
||||
msgstr ""
|
||||
msgstr "Odut"
|
||||
|
||||
#. name for odk
|
||||
msgid "Od"
|
||||
msgstr ""
|
||||
msgstr "Od"
|
||||
|
||||
#. name for odt
|
||||
msgid "Dutch; Old"

@@ -19588,11 +19588,11 @@ msgstr "Holandès; antic"

#. name for odu
msgid "Odual"
-msgstr ""
+msgstr "Odual"

#. name for ofo
msgid "Ofo"
-msgstr ""
+msgstr "Ofo"

#. name for ofs
msgid "Frisian; Old"

@@ -19604,11 +19604,11 @@ msgstr ""

#. name for ogb
msgid "Ogbia"
-msgstr ""
+msgstr "Ogbia"

#. name for ogc
msgid "Ogbah"
-msgstr ""
+msgstr "Ogbah"

#. name for oge
msgid "Georgian; Old"

@@ -19616,7 +19616,7 @@ msgstr ""

#. name for ogg
msgid "Ogbogolo"
-msgstr ""
+msgstr "Ogbogolo"

#. name for ogo
msgid "Khana"

@@ -19624,7 +19624,7 @@ msgstr ""

#. name for ogu
msgid "Ogbronuagum"
-msgstr ""
+msgstr "Ogbronuagum"

#. name for oht
msgid "Hittite; Old"

@@ -19636,27 +19636,27 @@ msgstr "Hongarès; antic"

#. name for oia
msgid "Oirata"
-msgstr ""
+msgstr "Oirata"

#. name for oin
msgid "One; Inebu"
-msgstr ""
+msgstr "Oneià; Inebu"

#. name for ojb
msgid "Ojibwa; Northwestern"
-msgstr ""
+msgstr "Ojibwa; Nordoccidental"

#. name for ojc
msgid "Ojibwa; Central"
-msgstr ""
+msgstr "Ojibwa; Central"

#. name for ojg
msgid "Ojibwa; Eastern"
-msgstr ""
+msgstr "Ojibwa; Oriental"

#. name for oji
msgid "Ojibwa"
-msgstr ""
+msgstr "Ojibwa; Occidental"

#. name for ojp
msgid "Japanese; Old"

@@ -19664,11 +19664,11 @@ msgstr "Japonès; antic"

#. name for ojs
msgid "Ojibwa; Severn"
-msgstr ""
+msgstr "Ojibwa; Severn"

#. name for ojv
msgid "Ontong Java"
-msgstr ""
+msgstr "Ontong Java"

#. name for ojw
msgid "Ojibwa; Western"

@@ -19676,19 +19676,19 @@ msgstr ""

#. name for oka
msgid "Okanagan"
-msgstr ""
+msgstr "Colville-Okanagà"

#. name for okb
msgid "Okobo"
-msgstr ""
+msgstr "Okobo"

#. name for okd
msgid "Okodia"
-msgstr ""
+msgstr "Okodia"

#. name for oke
msgid "Okpe (Southwestern Edo)"
-msgstr ""
+msgstr "Okpe"

#. name for okh
msgid "Koresh-e Rostam"

@@ -19696,15 +19696,15 @@ msgstr ""
#. name for oki
msgid "Okiek"
-msgstr ""
+msgstr "Okiek"

#. name for okj
msgid "Oko-Juwoi"
-msgstr ""
+msgstr "Oko-Juwoi"

#. name for okk
msgid "One; Kwamtim"
-msgstr ""
+msgstr "Oneià; Kwamtim"

#. name for okl
msgid "Kentish Sign Language; Old"

@@ -19716,7 +19716,7 @@ msgstr ""

#. name for okn
msgid "Oki-No-Erabu"
-msgstr ""
+msgstr "Oki-No-Erabu"

#. name for oko
msgid "Korean; Old (3rd-9th cent.)"

@@ -19728,19 +19728,19 @@ msgstr ""

#. name for oks
msgid "Oko-Eni-Osayen"
-msgstr ""
+msgstr "Oko-Eni-Osayen"

#. name for oku
msgid "Oku"
-msgstr ""
+msgstr "Oku"

#. name for okv
msgid "Orokaiva"
-msgstr ""
+msgstr "Orokaiwa"

#. name for okx
msgid "Okpe (Northwestern Edo)"
-msgstr ""
+msgstr "Okpe-Idesa-Akuku; Okpe"

#. name for ola
msgid "Walungge"

@@ -19752,11 +19752,11 @@ msgstr ""

#. name for ole
msgid "Olekha"
-msgstr ""
+msgstr "Olekha"

#. name for olm
msgid "Oloma"
-msgstr ""
+msgstr "Oloma"

#. name for olo
msgid "Livvi"

@@ -19768,7 +19768,7 @@ msgstr ""

#. name for oma
msgid "Omaha-Ponca"
-msgstr ""
+msgstr "Omaha-Ponca"

#. name for omb
msgid "Ambae; East"

@@ -19780,23 +19780,23 @@ msgstr ""

#. name for ome
msgid "Omejes"
-msgstr ""
+msgstr "Omejes"

#. name for omg
msgid "Omagua"
-msgstr ""
+msgstr "Omagua"

#. name for omi
msgid "Omi"
-msgstr ""
+msgstr "Omi"

#. name for omk
msgid "Omok"
-msgstr ""
+msgstr "Omok"

#. name for oml
msgid "Ombo"
-msgstr ""
+msgstr "Ombo"

#. name for omn
msgid "Minoan"

@@ -19816,11 +19816,11 @@ msgstr ""

#. name for omt
msgid "Omotik"
-msgstr ""
+msgstr "Omotik"

#. name for omu
msgid "Omurano"
-msgstr ""
+msgstr "Omurano"

#. name for omw
msgid "Tairora; South"

@@ -19832,7 +19832,7 @@ msgstr ""

#. name for ona
msgid "Ona"
-msgstr ""
+msgstr "Ona"

#. name for onb
msgid "Lingao"

@@ -19840,31 +19840,31 @@ msgstr ""
#. name for one
msgid "Oneida"
-msgstr ""
+msgstr "Oneida"

#. name for ong
msgid "Olo"
-msgstr ""
+msgstr "Olo"

#. name for oni
msgid "Onin"
-msgstr ""
+msgstr "Onin"

#. name for onj
msgid "Onjob"
-msgstr ""
+msgstr "Onjob"

#. name for onk
msgid "One; Kabore"
-msgstr ""
+msgstr "Oneià; Kabore"

#. name for onn
msgid "Onobasulu"
-msgstr ""
+msgstr "Onobasulu"

#. name for ono
msgid "Onondaga"
-msgstr ""
+msgstr "Onondaga"

#. name for onp
msgid "Sartang"

@@ -19872,15 +19872,15 @@ msgstr ""

#. name for onr
msgid "One; Northern"
-msgstr ""
+msgstr "Oneià; Septentrional"

#. name for ons
msgid "Ono"
-msgstr ""
+msgstr "Ono"

#. name for ont
msgid "Ontenu"
-msgstr ""
+msgstr "Ontenu"

#. name for onu
msgid "Unua"

@@ -19900,23 +19900,23 @@ msgstr ""

#. name for oog
msgid "Ong"
-msgstr ""
+msgstr "Ong"

#. name for oon
msgid "Önge"
-msgstr ""
+msgstr "Onge"

#. name for oor
msgid "Oorlams"
-msgstr ""
+msgstr "Oorlams"

#. name for oos
msgid "Ossetic; Old"
-msgstr ""
+msgstr "Osset"

#. name for opa
msgid "Okpamheri"
-msgstr ""
+msgstr "Okpamheri"

#. name for opk
msgid "Kopkaka"

@@ -19924,39 +19924,39 @@ msgstr ""

#. name for opm
msgid "Oksapmin"
-msgstr ""
+msgstr "Oksapmin"

#. name for opo
msgid "Opao"
-msgstr ""
+msgstr "Opao"

#. name for opt
msgid "Opata"
-msgstr ""
+msgstr "Opata"

#. name for opy
msgid "Ofayé"
-msgstr ""
+msgstr "Opaie"

#. name for ora
msgid "Oroha"
-msgstr ""
+msgstr "Oroha"

#. name for orc
msgid "Orma"
-msgstr ""
+msgstr "Orma"

#. name for ore
msgid "Orejón"
-msgstr ""
+msgstr "Orejon"

#. name for org
msgid "Oring"
-msgstr ""
+msgstr "Oring"

#. name for orh
msgid "Oroqen"
-msgstr ""
+msgstr "Orotxen"

#. name for ori
msgid "Oriya"

@@ -19968,19 +19968,19 @@ msgstr "Oromo"
#. name for orn
msgid "Orang Kanaq"
-msgstr ""
+msgstr "Orang; Kanaq"

#. name for oro
msgid "Orokolo"
-msgstr ""
+msgstr "Orocolo"

#. name for orr
msgid "Oruma"
-msgstr ""
+msgstr "Oruma"

#. name for ors
msgid "Orang Seletar"
-msgstr ""
+msgstr "Orang; Seletar"

#. name for ort
msgid "Oriya; Adivasi"

@@ -19988,7 +19988,7 @@ msgstr "Oriya; Adivasi"

#. name for oru
msgid "Ormuri"
-msgstr ""
+msgstr "Ormuri"

#. name for orv
msgid "Russian; Old"

@@ -19996,31 +19996,31 @@ msgstr "Rus; antic"

#. name for orw
msgid "Oro Win"
-msgstr ""
+msgstr "Oro Win"

#. name for orx
msgid "Oro"
-msgstr ""
+msgstr "Oro"

#. name for orz
msgid "Ormu"
-msgstr ""
+msgstr "Ormu"

#. name for osa
msgid "Osage"
-msgstr ""
+msgstr "Osage"

#. name for osc
msgid "Oscan"
-msgstr ""
+msgstr "Osc"

#. name for osi
msgid "Osing"
-msgstr ""
+msgstr "Osing"

#. name for oso
msgid "Ososo"
-msgstr ""
+msgstr "Ososo"

#. name for osp
msgid "Spanish; Old"

@@ -20028,15 +20028,15 @@ msgstr "Espanyol; antic"

#. name for oss
msgid "Ossetian"
-msgstr ""
+msgstr "Osset"

#. name for ost
msgid "Osatu"
-msgstr ""
+msgstr "Osatu"

#. name for osu
msgid "One; Southern"
-msgstr ""
+msgstr "One; Meridional"

#. name for osx
msgid "Saxon; Old"

@@ -20052,15 +20052,15 @@ msgstr ""

#. name for otd
msgid "Ot Danum"
-msgstr ""
+msgstr "Dohoi"

#. name for ote
msgid "Otomi; Mezquital"
-msgstr ""
+msgstr "Otomí; Mezquital"

#. name for oti
msgid "Oti"
-msgstr ""
+msgstr "Oti"

#. name for otk
msgid "Turkish; Old"

@@ -20068,43 +20068,43 @@ msgstr "Turc; antic"
#. name for otl
msgid "Otomi; Tilapa"
-msgstr ""
+msgstr "Otomí; Tilapa"

#. name for otm
msgid "Otomi; Eastern Highland"
-msgstr ""
+msgstr "Otomí; Oriental"

#. name for otn
msgid "Otomi; Tenango"
-msgstr ""
+msgstr "Otomí; Tenango"

#. name for otq
msgid "Otomi; Querétaro"
-msgstr ""
+msgstr "Otomí; Queretaro"

#. name for otr
msgid "Otoro"
-msgstr ""
+msgstr "Otoro"

#. name for ots
msgid "Otomi; Estado de México"
-msgstr ""
+msgstr "Otomí; Estat de Mèxic"

#. name for ott
msgid "Otomi; Temoaya"
-msgstr ""
+msgstr "Otomí; Temoaya"

#. name for otu
msgid "Otuke"
-msgstr ""
+msgstr "Otuke"

#. name for otw
msgid "Ottawa"
-msgstr ""
+msgstr "Ottawa"

#. name for otx
msgid "Otomi; Texcatepec"
-msgstr ""
+msgstr "Otomí; Texcatepec"

#. name for oty
msgid "Tamil; Old"

@@ -20112,7 +20112,7 @@ msgstr ""

#. name for otz
msgid "Otomi; Ixtenco"
-msgstr ""
+msgstr "Otomí; Ixtenc"

#. name for oua
msgid "Tagargrent"

@@ -20124,7 +20124,7 @@ msgstr ""

#. name for oue
msgid "Oune"
-msgstr ""
+msgstr "Oune"

#. name for oui
msgid "Uighur; Old"

@@ -20132,15 +20132,15 @@ msgstr ""

#. name for oum
msgid "Ouma"
-msgstr ""
+msgstr "Ouma"

#. name for oun
msgid "!O!ung"
-msgstr ""
+msgstr "Oung"

#. name for owi
msgid "Owiniga"
-msgstr ""
+msgstr "Owiniga"

#. name for owl
msgid "Welsh; Old"

@@ -20148,11 +20148,11 @@ msgstr "Gal·lès; antic"

#. name for oyb
msgid "Oy"
-msgstr ""
+msgstr "Oy"

#. name for oyd
msgid "Oyda"
-msgstr ""
+msgstr "Oyda"

#. name for oym
msgid "Wayampi"

@@ -20160,7 +20160,7 @@ msgstr ""

#. name for oyy
msgid "Oya'oya"
-msgstr ""
+msgstr "Oya'oya"

#. name for ozm
msgid "Koonzime"

@@ -20168,27 +20168,27 @@ msgstr ""
#. name for pab
msgid "Parecís"
-msgstr ""
+msgstr "Pareci"

#. name for pac
msgid "Pacoh"
-msgstr ""
+msgstr "Pacoh"

#. name for pad
msgid "Paumarí"
-msgstr ""
+msgstr "Paumarí"

#. name for pae
msgid "Pagibete"
-msgstr ""
+msgstr "Pagibete"

#. name for paf
msgid "Paranawát"
-msgstr ""
+msgstr "Paranawat"

#. name for pag
msgid "Pangasinan"
-msgstr ""
+msgstr "Pangasi"

#. name for pah
msgid "Tenharim"

@@ -20196,19 +20196,19 @@ msgstr ""

#. name for pai
msgid "Pe"
-msgstr ""
+msgstr "Pe"

#. name for pak
msgid "Parakanã"
-msgstr ""
+msgstr "Akwawa; Parakanà"

#. name for pal
msgid "Pahlavi"
-msgstr ""
+msgstr "Pahlavi"

#. name for pam
msgid "Pampanga"
-msgstr ""
+msgstr "Pampangà"

#. name for pan
msgid "Panjabi"

@@ -20220,63 +20220,63 @@ msgstr ""

#. name for pap
msgid "Papiamento"
-msgstr ""
+msgstr "Papiament"

#. name for paq
msgid "Parya"
-msgstr ""
+msgstr "Parya"

#. name for par
msgid "Panamint"
-msgstr ""
+msgstr "Panamint"

#. name for pas
msgid "Papasena"
-msgstr ""
+msgstr "Papasena"

#. name for pat
msgid "Papitalai"
-msgstr ""
+msgstr "Papitalai"

#. name for pau
msgid "Palauan"
-msgstr ""
+msgstr "Palavà"

#. name for pav
msgid "Pakaásnovos"
-msgstr ""
+msgstr "Pakaa Nova"

#. name for paw
msgid "Pawnee"
-msgstr ""
+msgstr "Pawnee"

#. name for pax
msgid "Pankararé"
-msgstr ""
+msgstr "Pankararé"

#. name for pay
msgid "Pech"
-msgstr ""
+msgstr "Pech"

#. name for paz
msgid "Pankararú"
-msgstr ""
+msgstr "Pankarurú"
#. name for pbb
msgid "Páez"
-msgstr ""
+msgstr "Páez"

#. name for pbc
msgid "Patamona"
-msgstr ""
+msgstr "Patamona"

#. name for pbe
msgid "Popoloca; Mezontla"
-msgstr ""
+msgstr "Popoloca; Mezontla"

#. name for pbf
msgid "Popoloca; Coyotepec"
-msgstr ""
+msgstr "Popoloca; Coyotepec"

#. name for pbg
msgid "Paraujano"

@@ -20288,7 +20288,7 @@ msgstr ""

#. name for pbi
msgid "Parkwa"
-msgstr ""
+msgstr "Parkwa"

#. name for pbl
msgid "Mak (Nigeria)"

@@ -20300,7 +20300,7 @@ msgstr ""

#. name for pbo
msgid "Papel"
-msgstr ""
+msgstr "Papel"

#. name for pbp
msgid "Badyara"

@@ -20336,7 +20336,7 @@ msgstr ""

#. name for pca
msgid "Popoloca; Santa Inés Ahuatempan"
-msgstr ""
+msgstr "Popoloca; Ahuatempan"

#. name for pcb
msgid "Pear"

@@ -20832,7 +20832,7 @@ msgstr "Senufo; Palaka"

#. name for pls
msgid "Popoloca; San Marcos Tlalcoyalco"
-msgstr ""
+msgstr "Popoloca; Tlalcoyalc"

#. name for plt
msgid "Malagasy; Plateau"

@@ -21040,7 +21040,7 @@ msgstr ""

#. name for poe
msgid "Popoloca; San Juan Atzingo"
-msgstr ""
+msgstr "Popoloca; Atzingo"

#. name for pof
msgid "Poke"

@@ -21104,7 +21104,7 @@ msgstr ""

#. name for pow
msgid "Popoloca; San Felipe Otlaltepec"
-msgstr ""
+msgstr "Popoloca; Otlaltepec"

#. name for pox
msgid "Polabian"

@@ -21160,7 +21160,7 @@ msgstr ""

#. name for pps
msgid "Popoloca; San Luís Temalacayuca"
-msgstr ""
+msgstr "Popoloca; Temalacayuca"

#. name for ppt
msgid "Pare"
@@ -9,13 +9,13 @@ msgstr ""
"Project-Id-Version: calibre\n"
"Report-Msgid-Bugs-To: FULL NAME <EMAIL@ADDRESS>\n"
"POT-Creation-Date: 2011-11-25 14:01+0000\n"
-"PO-Revision-Date: 2012-12-24 08:05+0000\n"
-"Last-Translator: Adolfo Jayme Barrientos <fitoschido@gmail.com>\n"
+"PO-Revision-Date: 2012-12-28 09:13+0000\n"
+"Last-Translator: Jellby <Unknown>\n"
"Language-Team: Español; Castellano <>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
-"X-Launchpad-Export-Date: 2012-12-25 04:46+0000\n"
+"X-Launchpad-Export-Date: 2012-12-29 05:00+0000\n"
"X-Generator: Launchpad (build 16378)\n"

#. name for aaa

@@ -9584,7 +9584,7 @@ msgstr "Holikachuk"

#. name for hoj
msgid "Hadothi"
-msgstr "Hadothi"
+msgstr "Hadoti"

#. name for hol
msgid "Holu"

@@ -11796,7 +11796,7 @@ msgstr ""

#. name for khq
msgid "Songhay; Koyra Chiini"
-msgstr ""
+msgstr "Songhay koyra chiini"

#. name for khr
msgid "Kharia"
@@ -227,9 +227,22 @@ class GetTranslations(Translations):  # {{{
                ans.append(line.split()[-1])
        return ans

+    def resolve_conflicts(self):
+        conflict = False
+        for line in subprocess.check_output(['bzr', 'status']).splitlines():
+            if line == 'conflicts:':
+                conflict = True
+                break
+        if not conflict:
+            raise Exception('bzr merge failed and no conflicts found')
+        subprocess.check_call(['bzr', 'resolve', '--take-other'])
+
    def run(self, opts):
        if not self.modified_translations:
-            subprocess.check_call(['bzr', 'merge', self.BRANCH])
+            try:
+                subprocess.check_call(['bzr', 'merge', self.BRANCH])
+            except subprocess.CalledProcessError:
+                self.resolve_conflicts()
            self.check_for_errors()

        if self.modified_translations:
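The hunk above first attempts a clean `bzr merge` and, when that fails with conflicts, auto-resolves by taking the incoming side (`bzr resolve --take-other`). The same flow can be sketched outside calibre's build class; `has_conflicts` and `merge_taking_theirs` are illustrative names, not calibre APIs:

```python
import subprocess

def has_conflicts(status_output):
    # bzr prints a 'conflicts:' section in `bzr status` output when a
    # merge left unresolved conflicts; detect that section header.
    return any(line.strip() == 'conflicts:' for line in status_output.splitlines())

def merge_taking_theirs(branch):
    # Try a clean merge first; on failure, verify conflicts actually
    # exist, then resolve them by taking the incoming side, mirroring
    # the resolve_conflicts() method in the hunk above.
    try:
        subprocess.check_call(['bzr', 'merge', branch])
    except subprocess.CalledProcessError:
        status = subprocess.check_output(['bzr', 'status']).decode('utf-8', 'replace')
        if not has_conflicts(status):
            raise Exception('bzr merge failed and no conflicts found')
        subprocess.check_call(['bzr', 'resolve', '--take-other'])
```

Checking for the `conflicts:` section before resolving guards against masking an unrelated merge failure as a routine conflict.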
@@ -4,7 +4,7 @@ __license__ = 'GPL v3'
__copyright__ = '2008, Kovid Goyal kovid@kovidgoyal.net'
__docformat__ = 'restructuredtext en'
__appname__ = u'calibre'
-numeric_version = (0, 9, 12)
+numeric_version = (0, 9, 13)
__version__ = u'.'.join(map(unicode, numeric_version))
__author__ = u"Kovid Goyal <kovid@kovidgoyal.net>"
@@ -191,7 +191,7 @@ class ANDROID(USBMS):
        0x10a9 : { 0x6050 : [0x227] },

        # Prestigio
-        0x2207 : { 0 : [0x222] },
+        0x2207 : { 0 : [0x222], 0x10 : [0x222] },

        }
    EBOOK_DIR_MAIN = ['eBooks/import', 'wordplayer/calibretransfer', 'Books',
@@ -734,6 +734,7 @@ initlibmtp(void) {
    // who designs a library without anyway to control/redirect the debugging
    // output, and hardcoded paths that cannot be changed?
    int bak, new;
+    fprintf(stdout, "\n"); // This is needed, without it, for some odd reason the code below causes stdout to buffer all output after it is restored, rather than using line buffering, and setlinebuf does not work.
    fflush(stdout);
    bak = dup(STDOUT_FILENO);
    new = open("/dev/null", O_WRONLY);
@@ -8,11 +8,11 @@ __docformat__ = 'restructuredtext en'
Convert OEB ebook format to PDF.
'''

-import glob
-import os
+import glob, os

-from calibre.customize.conversion import OutputFormatPlugin, \
-    OptionRecommendation
+from calibre.constants import iswindows
+from calibre.customize.conversion import (OutputFormatPlugin,
+    OptionRecommendation)
from calibre.ptempfile import TemporaryDirectory

UNITS = ['millimeter', 'centimeter', 'point', 'inch' , 'pica' , 'didot',

@@ -136,8 +136,8 @@ class PDFOutput(OutputFormatPlugin):
        '''
        from calibre.ebooks.oeb.base import urlnormalize
        from calibre.gui2 import must_use_qt
-        from calibre.utils.fonts.utils import get_font_names, remove_embed_restriction
-        from PyQt4.Qt import QFontDatabase, QByteArray
+        from calibre.utils.fonts.utils import remove_embed_restriction
+        from PyQt4.Qt import QFontDatabase, QByteArray, QRawFont, QFont

        # First find all @font-face rules and remove them, adding the embedded
        # fonts to Qt

@@ -166,11 +166,13 @@ class PDFOutput(OutputFormatPlugin):
            except:
                continue
            must_use_qt()
-            QFontDatabase.addApplicationFontFromData(QByteArray(raw))
-            try:
-                family_name = get_font_names(raw)[0]
-            except:
-                family_name = None
+            fid = QFontDatabase.addApplicationFontFromData(QByteArray(raw))
+            family_name = None
+            if fid > -1:
+                try:
+                    family_name = unicode(QFontDatabase.applicationFontFamilies(fid)[0])
+                except (IndexError, KeyError):
+                    pass
            if family_name:
                family_map[icu_lower(font_family)] = family_name

@@ -179,6 +181,7 @@ class PDFOutput(OutputFormatPlugin):

        # Now map the font family name specified in the css to the actual
        # family name of the embedded font (they may be different in general).
+        font_warnings = set()
        for item in self.oeb.manifest:
            if not hasattr(item.data, 'cssRules'): continue
            for i, rule in enumerate(item.data.cssRules):

@@ -187,9 +190,28 @@ class PDFOutput(OutputFormatPlugin):
                if ff is None: continue
                val = ff.propertyValue
                for i in xrange(val.length):
-                    k = icu_lower(val[i].value)
+                    try:
+                        k = icu_lower(val[i].value)
+                    except (AttributeError, TypeError):
+                        val[i].value = k = 'times'
                    if k in family_map:
                        val[i].value = family_map[k]
+                if iswindows:
+                    # On windows, Qt uses GDI which does not support OpenType
+                    # (CFF) fonts, so we need to nuke references to OpenType
+                    # fonts. Note that you could compile QT with configure
+                    # -directwrite, but that requires atleast Vista SP2
+                    for i in xrange(val.length):
+                        family = val[i].value
+                        if family:
+                            f = QRawFont.fromFont(QFont(family))
+                            if len(f.fontTable('head')) == 0:
+                                if family not in font_warnings:
+                                    self.log.warn('Ignoring unsupported font: %s'
+                                            %family)
+                                    font_warnings.add(family)
+                                # Either a bitmap or (more likely) a CFF font
+                                val[i].value = 'times'

    def convert_text(self, oeb_book):
        from calibre.ebooks.metadata.opf2 import OPF
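The hunks above register each embedded font with Qt, record the real family name Qt reports for it, and then rewrite the `font-family` values in the CSS through that mapping, falling back to a safe serif for non-string values. The substitution step can be sketched in isolation; `map_font_family` is an illustrative helper, and plain `str.lower()` stands in for calibre's `icu_lower()`:

```python
def map_font_family(family_map, value, fallback='times'):
    # family_map: lowercased CSS family name -> actual family name of
    # the embedded font (they may differ, as the hunk above notes).
    try:
        key = value.lower()
    except AttributeError:
        # Non-string font-family values degrade to a safe serif,
        # mirroring the AttributeError/TypeError branch above.
        return fallback
    # Substitute the embedded font's real name when known, otherwise
    # leave the declared family untouched.
    return family_map.get(key, value)
```

Keeping the map keyed on a normalized (lowercased) name is what makes the lookup robust to the mixed casing commonly found in book CSS.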
@@ -41,7 +41,6 @@ def find_custom_fonts(options, logger):
    if options.serif_family:
        f = family(options.serif_family)
        fonts['serif'] = font_scanner.legacy_fonts_for_family(f)
-        print (111111, fonts['serif'])
        if not fonts['serif']:
            logger.warn('Unable to find serif family %s'%f)
    if options.sans_family:
@@ -19,7 +19,7 @@ from calibre.constants import plugins
from calibre.ebooks.pdf.render.serialize import (PDFStream, Path)
from calibre.ebooks.pdf.render.common import inch, A4, fmtnum
from calibre.ebooks.pdf.render.graphics import convert_path, Graphics
-from calibre.utils.fonts.sfnt.container import Sfnt
+from calibre.utils.fonts.sfnt.container import Sfnt, UnsupportedFont
from calibre.utils.fonts.sfnt.metrics import FontMetrics

Point = namedtuple('Point', 'x y')

@@ -224,7 +224,11 @@ class PdfEngine(QPaintEngine):

    def create_sfnt(self, text_item):
        get_table = partial(self.qt_hack.get_sfnt_table, text_item)
-        ans = Font(Sfnt(get_table))
+        try:
+            ans = Font(Sfnt(get_table))
+        except UnsupportedFont as e:
+            raise UnsupportedFont('The font %s is not a valid sfnt. Error: %s'%(
+                text_item.font().family(), e))
        glyph_map = self.qt_hack.get_glyph_map(text_item)
        gm = {}
        for uc, glyph_id in enumerate(glyph_map):

@@ -251,18 +255,14 @@ class PdfEngine(QPaintEngine):
        except (KeyError, ValueError):
            pass
        glyphs = []
-        pdf_pos = point
-        first_baseline = None
+        last_x = last_y = 0
        for i, pos in enumerate(gi.positions):
-            if first_baseline is None:
-                first_baseline = pos.y()
-            glyph_pos = pos
-            delta = glyph_pos - pdf_pos
-            glyphs.append((delta.x(), pos.y()-first_baseline, gi.indices[i]))
-            pdf_pos = glyph_pos
+            x, y = pos.x(), pos.y()
+            glyphs.append((x-last_x, last_y - y, gi.indices[i]))
+            last_x, last_y = x, y

-        self.pdf.draw_glyph_run([1, 0, 0, -1, point.x(),
-            point.y()], gi.size, metrics, glyphs)
+        self.pdf.draw_glyph_run([gi.stretch, 0, 0, -1, 0, 0], gi.size, metrics,
+                glyphs)
        sip.delete(gi)

    @store_error
@@ -176,6 +176,7 @@ class PDFWriter(QObject):
            p = QPixmap()
            p.loadFromData(self.cover_data)
+            if not p.isNull():
                self.doc.init_page()
                draw_image_page(QRect(0, 0, self.doc.width(), self.doc.height()),
                        self.painter, p,
                        preserve_aspect_ratio=self.opts.preserve_cover_aspect_ratio)
src/calibre/ebooks/pdf/render/gradients.py (new file, 37 lines)
@@ -0,0 +1,37 @@
+#!/usr/bin/env python
+# vim:fileencoding=UTF-8:ts=4:sw=4:sta:et:sts=4:fdm=marker:ai
+from __future__ import (unicode_literals, division, absolute_import,
+                        print_function)
+
+__license__ = 'GPL v3'
+__copyright__ = '2013, Kovid Goyal <kovid at kovidgoyal.net>'
+__docformat__ = 'restructuredtext en'
+
+from future_builtins import map
+
+from PyQt4.Qt import (QPointF)
+
+from calibre.ebooks.pdf.render.common import Stream
+
+def generate_linear_gradient_shader(gradient, page_rect, is_transparent=False):
+    pass
+
+class LinearGradient(Stream):
+
+    def __init__(self, brush, matrix, pixel_page_width, pixel_page_height):
+        is_opaque = brush.isOpaque()
+        gradient = brush.gradient()
+        inv = matrix.inverted()[0]
+
+        page_rect = tuple(map(inv.map, (
+            QPointF(0, 0), QPointF(pixel_page_width, 0), QPointF(0, pixel_page_height),
+            QPointF(pixel_page_width, pixel_page_height))))
+
+        shader = generate_linear_gradient_shader(gradient, page_rect)
+        alpha_shader = None
+        if not is_opaque:
+            alpha_shader = generate_linear_gradient_shader(gradient, page_rect, True)
+
+        shader, alpha_shader
@@ -58,7 +58,13 @@ class Links(object):
                0])})
            if is_local:
                path = combined_path if href else path
-                annot['Dest'] = self.anchors[path][frag]
+                try:
+                    annot['Dest'] = self.anchors[path][frag]
+                except KeyError:
+                    try:
+                        annot['Dest'] = self.anchors[path][None]
+                    except KeyError:
+                        pass
            else:
                url = href + (('#'+frag) if frag else '')
                purl = urlparse(url)
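The hunk above makes internal PDF links degrade gracefully: when the exact fragment anchor was never registered, the link falls back to the top-of-file anchor (stored under the `None` key), and if even that is missing the annotation simply gets no destination. The lookup pattern, sketched independently of calibre's `Links` class with an illustrative function name:

```python
def resolve_dest(anchors, path, frag):
    # anchors maps file path -> {fragment: destination}; the None key
    # holds the destination for the top of the file. Prefer the exact
    # fragment, fall back to the top of the file, else give up.
    by_frag = anchors.get(path, {})
    if frag in by_frag:
        return by_frag[frag]
    return by_frag.get(None)
```

Returning `None` instead of raising mirrors the hunk's final `pass`: a link with no resolvable target is emitted without a destination rather than aborting the conversion.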
@@ -17,18 +17,25 @@ GlyphInfo* get_glyphs(QPointF &p, const QTextItem &text_item) {
    QFontEngine *fe = ti.fontEngine;
    qreal size = ti.fontEngine->fontDef.pixelSize;
#ifdef Q_WS_WIN
-    if (ti.fontEngine->type() == QFontEngine::Win) {
+    if (false && ti.fontEngine->type() == QFontEngine::Win) {
+        // This is used in the Qt sourcecode, but it gives incorrect results,
+        // so I have disabled it. I dont understand how it works in qpdf.cpp
        QFontEngineWin *fe = static_cast<QFontEngineWin *>(ti.fontEngine);
+        // I think this should be tmHeight - tmInternalLeading, but pixelSize
+        // seems to work on windows as well, so leave it as pixelSize
        size = fe->tm.tmHeight;
    }
#endif
+    int synthesized = ti.fontEngine->synthesized();
+    qreal stretch = synthesized & QFontEngine::SynthesizedStretch ? ti.fontEngine->fontDef.stretch/100. : 1.;

    QVarLengthArray<glyph_t> glyphs;
    QVarLengthArray<QFixedPoint> positions;
    QTransform m = QTransform::fromTranslate(p.x(), p.y());
    fe->getGlyphPositions(ti.glyphs, m, ti.flags, glyphs, positions);
    QVector<QPointF> points = QVector<QPointF>(positions.count());
    for (int i = 0; i < positions.count(); i++) {
-        points[i].setX(positions[i].x.toReal());
+        points[i].setX(positions[i].x.toReal()/stretch);
        points[i].setY(positions[i].y.toReal());
    }

@@ -38,10 +45,10 @@ GlyphInfo* get_glyphs(QPointF &p, const QTextItem &text_item) {

    const quint32 *tag = reinterpret_cast<const quint32 *>("name");

-    return new GlyphInfo(fe->getSfntTable(qToBigEndian(*tag)), size, points, indices);
+    return new GlyphInfo(fe->getSfntTable(qToBigEndian(*tag)), size, stretch, points, indices);
}

-GlyphInfo::GlyphInfo(const QByteArray& name, qreal size, const QVector<QPointF> &positions, const QVector<unsigned int> &indices) :name(name), positions(positions), size(size), indices(indices) {
+GlyphInfo::GlyphInfo(const QByteArray& name, qreal size, qreal stretch, const QVector<QPointF> &positions, const QVector<unsigned int> &indices) :name(name), positions(positions), size(size), stretch(stretch), indices(indices) {
}

QByteArray get_sfnt_table(const QTextItem &text_item, const char* tag_name) {
@@ -17,9 +17,10 @@ class GlyphInfo {
    QByteArray name;
    QVector<QPointF> positions;
    qreal size;
+    qreal stretch;
    QVector<unsigned int> indices;

-    GlyphInfo(const QByteArray &name, qreal size, const QVector<QPointF> &positions, const QVector<unsigned int> &indices);
+    GlyphInfo(const QByteArray &name, qreal size, qreal stretch, const QVector<QPointF> &positions, const QVector<unsigned int> &indices);

private:
    GlyphInfo(const GlyphInfo&);
@@ -13,9 +13,10 @@ class GlyphInfo {
public:
    QByteArray name;
    qreal size;
+    qreal stretch;
    QVector<QPointF> &positions;
    QVector<unsigned int> indices;
-    GlyphInfo(const QByteArray &name, qreal size, const QVector<QPointF> &positions, const QVector<unsigned int> &indices);
+    GlyphInfo(const QByteArray &name, qreal size, qreal stretch, const QVector<QPointF> &positions, const QVector<unsigned int> &indices);
private:
    GlyphInfo(const GlyphInfo& g);
@@ -8,7 +8,6 @@ __copyright__ = '2012, Kovid Goyal <kovid at kovidgoyal.net>'
__docformat__ = 'restructuredtext en'

import os
-from tempfile import gettempdir

from PyQt4.Qt import (QBrush, QColor, QPoint, QPixmap, QPainterPath, QRectF,
                      QApplication, QPainter, Qt, QImage, QLinearGradient,

@@ -99,12 +98,17 @@ def pen(p, xmax, ymax):
    p.drawRect(0, xmax/3, xmax/3, xmax/2)

def text(p, xmax, ymax):
    p.drawText(QPoint(0, ymax/3), 'Text')
+    f = p.font()
+    f.setPixelSize(24)
+    f.setFamily('Candara')
+    p.setFont(f)
+    p.drawText(QPoint(0, 100),
+               'Test intra glyph spacing ffagain imceo')

def main():
    app = QApplication([])
    app
-    tdir = gettempdir()
+    tdir = os.path.abspath('.')
    pdf = os.path.join(tdir, 'painter.pdf')
    func = full
    dpi = 100
@@ -169,6 +169,10 @@ class ChooseLibraryAction(InterfaceAction):

        self.choose_menu = self.qaction.menu()

+        ac = self.create_action(spec=(_('Pick a random book'), 'random.png',
+            None, None), attr='action_pick_random')
+        ac.triggered.connect(self.pick_random)
+
        if not os.environ.get('CALIBRE_OVERRIDE_DATABASE_PATH', None):
            self.choose_menu.addAction(self.action_choose)

@@ -176,13 +180,11 @@ class ChooseLibraryAction(InterfaceAction):
            self.quick_menu_action = self.choose_menu.addMenu(self.quick_menu)
            self.rename_menu = QMenu(_('Rename library'))
            self.rename_menu_action = self.choose_menu.addMenu(self.rename_menu)
+            self.choose_menu.addAction(ac)
            self.delete_menu = QMenu(_('Remove library'))
            self.delete_menu_action = self.choose_menu.addMenu(self.delete_menu)

-            ac = self.create_action(spec=(_('Pick a random book'), 'random.png',
-                None, None), attr='action_pick_random')
-            ac.triggered.connect(self.pick_random)
-            self.choose_menu.addAction(ac)
        else:
            self.choose_menu.addAction(ac)

        self.rename_separator = self.choose_menu.addSeparator()
@ -8,10 +8,10 @@ from functools import partial
|
||||
from PyQt4.Qt import QThread, QObject, Qt, QProgressDialog, pyqtSignal, QTimer
|
||||
|
||||
from calibre.gui2.dialogs.progress import ProgressDialog
|
||||
from calibre.gui2 import (question_dialog, error_dialog, info_dialog, gprefs,
|
||||
from calibre.gui2 import (error_dialog, info_dialog, gprefs,
|
||||
warning_dialog, available_width)
|
||||
from calibre.ebooks.metadata.opf2 import OPF
|
||||
from calibre.ebooks.metadata import MetaInformation, authors_to_string
|
||||
from calibre.ebooks.metadata import MetaInformation
|
||||
from calibre.constants import preferred_encoding, filesystem_encoding, DEBUG
|
||||
from calibre.utils.config import prefs
|
||||
from calibre import prints, force_unicode, as_unicode
|
||||
@@ -391,25 +391,10 @@ class Adder(QObject): # {{{
         if not duplicates:
             return self.duplicates_processed()
         self.pd.hide()
-        duplicate_message = []
-        for x in duplicates:
-            duplicate_message.append(_('Already in calibre:'))
-            matching_books = self.db.books_with_same_title(x[0])
-            for book_id in matching_books:
-                aut = [a.replace('|', ',') for a in (self.db.authors(book_id,
-                    index_is_id=True) or '').split(',')]
-                duplicate_message.append('\t'+ _('%(title)s by %(author)s')%
-                        dict(title=self.db.title(book_id, index_is_id=True),
-                            author=authors_to_string(aut)))
-            duplicate_message.append(_('You are trying to add:'))
-            duplicate_message.append('\t'+_('%(title)s by %(author)s')%
-                        dict(title=x[0].title,
-                            author=x[0].format_field('authors')[1]))
-            duplicate_message.append('')
-        if question_dialog(self._parent, _('Duplicates found!'),
-                _('Books with the same title as the following already '
-                'exist in calibre. Add them anyway?'),
-                '\n'.join(duplicate_message)):
+        from calibre.gui2.dialogs.duplicates import DuplicatesQuestion
+        d = DuplicatesQuestion(self.db, duplicates, self._parent)
+        duplicates = tuple(d.duplicates)
+        if duplicates:
             pd = QProgressDialog(_('Adding duplicates...'), '', 0, len(duplicates),
                     self._parent)
             pd.setCancelButton(None)
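The hunk above replaces a single yes/no `question_dialog` over all duplicates with a `DuplicatesQuestion` dialog whose `duplicates` property yields only the books the user left checked. A Qt-free sketch of that selection flow (class and method names here are illustrative, not calibre's API):

```python
class DuplicatesQuestionSketch:
    """Stand-in for the dialog's checkable list: every duplicate starts
    checked, the user may uncheck entries, and `duplicates` returns only
    the entries still checked, in their original order."""

    def __init__(self, duplicates):
        # mirrors item.setCheckState(0, Qt.Checked) on each top-level item
        self._entries = [[dup, True] for dup in duplicates]

    def set_checked(self, index, checked):
        self._entries[index][1] = checked

    @property
    def duplicates(self):
        # mirrors the checkState(0) == Qt.Checked filter in the real dialog
        return tuple(dup for dup, checked in self._entries if checked)
```

The caller then proceeds only with the filtered tuple, exactly as `Adder` now does with `tuple(d.duplicates)`.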
src/calibre/gui2/dialogs/duplicates.py (new file, 118 lines)
@@ -0,0 +1,118 @@
#!/usr/bin/env python
# vim:fileencoding=UTF-8:ts=4:sw=4:sta:et:sts=4:fdm=marker:ai
from __future__ import (unicode_literals, division, absolute_import,
                        print_function)

__license__ = 'GPL v3'
__copyright__ = '2013, Kovid Goyal <kovid at kovidgoyal.net>'
__docformat__ = 'restructuredtext en'

from PyQt4.Qt import (QDialog, QGridLayout, QIcon, QLabel, QTreeWidget,
                      QTreeWidgetItem, Qt, QFont, QDialogButtonBox)

from calibre.ebooks.metadata import authors_to_string

class DuplicatesQuestion(QDialog):

    def __init__(self, db, duplicates, parent=None):
        QDialog.__init__(self, parent)
        self.l = l = QGridLayout()
        self.setLayout(l)
        self.setWindowTitle(_('Duplicates found!'))
        self.i = i = QIcon(I('dialog_question.png'))
        self.setWindowIcon(i)

        self.l1 = l1 = QLabel()
        self.l2 = l2 = QLabel(_(
            'Books with the same titles as the following already '
            'exist in calibre. Select which books you want added anyway.'))
        l2.setWordWrap(True)
        l1.setPixmap(i.pixmap(128, 128))
        l.addWidget(l1, 0, 0)
        l.addWidget(l2, 0, 1)

        self.dup_list = dl = QTreeWidget(self)
        l.addWidget(dl, 1, 0, 1, 2)
        dl.setHeaderHidden(True)
        dl.addTopLevelItems(list(self.process_duplicates(db, duplicates)))
        dl.expandAll()
        dl.setIndentation(30)

        self.bb = bb = QDialogButtonBox(QDialogButtonBox.Ok|QDialogButtonBox.Cancel)
        bb.accepted.connect(self.accept)
        bb.rejected.connect(self.reject)
        l.addWidget(bb, 2, 0, 1, 2)
        self.ab = ab = bb.addButton(_('Select &all'), bb.ActionRole)
        ab.clicked.connect(self.select_all)
        self.nb = ab = bb.addButton(_('Select &none'), bb.ActionRole)
        ab.clicked.connect(self.select_none)

        self.resize(self.sizeHint())
        self.exec_()

    def select_all(self):
        for i in xrange(self.dup_list.topLevelItemCount()):
            x = self.dup_list.topLevelItem(i)
            x.setCheckState(0, Qt.Checked)

    def select_none(self):
        for i in xrange(self.dup_list.topLevelItemCount()):
            x = self.dup_list.topLevelItem(i)
            x.setCheckState(0, Qt.Unchecked)

    def reject(self):
        self.select_none()
        QDialog.reject(self)

    def process_duplicates(self, db, duplicates):
        ta = _('%(title)s by %(author)s')
        bf = QFont(self.dup_list.font())
        bf.setBold(True)
        itf = QFont(self.dup_list.font())
        itf.setItalic(True)

        for mi, cover, formats in duplicates:
            item = QTreeWidgetItem([ta%dict(
                title=mi.title, author=mi.format_field('authors')[1])], 0)
            item.setCheckState(0, Qt.Checked)
            item.setFlags(Qt.ItemIsEnabled|Qt.ItemIsUserCheckable)
            item.setData(0, Qt.FontRole, bf)
            item.setData(0, Qt.UserRole, (mi, cover, formats))
            matching_books = db.books_with_same_title(mi)

            def add_child(text):
                c = QTreeWidgetItem([text], 1)
                c.setFlags(Qt.ItemIsEnabled)
                item.addChild(c)
                return c

            add_child(_('Already in calibre:')).setData(0, Qt.FontRole, itf)

            for book_id in matching_books:
                aut = [a.replace('|', ',') for a in (db.authors(book_id,
                    index_is_id=True) or '').split(',')]
                add_child(ta%dict(
                    title=db.title(book_id, index_is_id=True),
                    author=authors_to_string(aut)))
            add_child('')

            yield item

    @property
    def duplicates(self):
        for i in xrange(self.dup_list.topLevelItemCount()):
            x = self.dup_list.topLevelItem(i)
            if x.checkState(0) == Qt.Checked:
                yield x.data(0, Qt.UserRole).toPyObject()

if __name__ == '__main__':
    from PyQt4.Qt import QApplication
    from calibre.ebooks.metadata.book.base import Metadata as M
    from calibre.library import db

    app = QApplication([])
    db = db()
    d = DuplicatesQuestion(db, [(M('Life of Pi', ['Yann Martel']), None, None),
        (M('Heirs of the blade', ['Adrian Tchaikovsky']), None, None)])
    print(tuple(d.duplicates))
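`process_duplicates` above undoes calibre's database convention of storing commas inside an author's name as `|`, so that `,` can separate multiple authors in one field. A small illustration of that round trip (the helper names are ours, not calibre's):

```python
def encode_author(name):
    # commas inside a single author's name become '|' in the stored field
    return name.replace(',', '|')

def decode_authors(field):
    # the inverse, as used in process_duplicates: split the field on ','
    # then restore '|' back to ','
    return [a.replace('|', ',') for a in field.split(',')]

field = ','.join(encode_author(a) for a in ['Tchaikovsky, Adrian', 'Yann Martel'])
# field == 'Tchaikovsky| Adrian,Yann Martel'
decode_authors(field)  # ['Tchaikovsky, Adrian', 'Yann Martel']
```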
@@ -1,10 +0,0 @@
-#!/usr/bin/env python
-# vim:fileencoding=UTF-8:ts=4:sw=4:sta:et:sts=4:ai
-from __future__ import (unicode_literals, division, absolute_import,
-        print_function)
-
-__license__ = 'GPL v3'
-__copyright__ = '2011, Kovid Goyal <kovid@kovidgoyal.net>'
-__docformat__ = 'restructuredtext en'
-
-
@@ -1106,6 +1106,7 @@ class SortKeyGenerator(object):
         self.library_order = tweaks['title_series_sorting'] == 'library_order'
         self.data = data
         self.string_sort_key = sort_key
+        self.lang_idx = field_metadata['languages']['rec_index']
 
     def __call__(self, record):
         values = tuple(self.itervals(self.data[record]))
@@ -1159,7 +1160,12 @@ class SortKeyGenerator(object):
                     val = ('', 1)
                 else:
                     if self.library_order:
-                        val = title_sort(val)
+                        try:
+                            lang = record[self.lang_idx].partition(u',')[0]
+                        except (AttributeError, ValueError, KeyError,
+                                IndexError, TypeError):
+                            lang = None
+                        val = title_sort(val, order='library_order', lang=lang)
                 sidx_fm = self.field_metadata[name + '_index']
                 sidx = record[sidx_fm['rec_index']]
                 val = (self.string_sort_key(val), sidx)
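The try/except added above extracts the record's primary language before calling `title_sort`: the languages field is a comma-separated string, and only its first entry should influence sorting, with a broad except guarding against a missing or malformed value. A standalone sketch of that extraction (the helper name is ours):

```python
def primary_language(langs):
    # the record stores languages as a comma-separated string such as
    # 'eng,fra'; only the first entry drives language-aware title sorting
    try:
        return langs.partition(',')[0]
    except AttributeError:
        # field is None or not a string, as the hunk's broad except covers
        return None

primary_language('eng,fra')  # 'eng'
primary_language(None)       # None
```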
@@ -663,9 +663,9 @@ class CatalogBuilder(object):
         # Hack to force the cataloged leading letter to be
         # an unadorned character if the accented version sorts before the unaccented
         exceptions = {
-            u'Ä': u'A',
-            u'Ö': u'O',
-            u'Ü': u'U'
+            u'Ä': u'A',
+            u'Ö': u'O',
+            u'Ü': u'U'
             }
 
         if key is not None:
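The `exceptions` dict above forces the catalog's section letter to the unadorned base character when an accented form would otherwise head the section. A sketch of how such a mapping is applied to a sort key (the function is ours, not calibre's):

```python
# accented leading letters collapse to their unadorned base character
EXCEPTIONS = {'Ä': 'A', 'Ö': 'O', 'Ü': 'U'}

def section_letter(sort_key):
    # leading letter of the sort key, normalized through the exceptions map
    first = sort_key[0].upper()
    return EXCEPTIONS.get(first, first)

section_letter('Äpfel')  # 'A'
```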
@@ -3473,7 +3473,7 @@ class CatalogBuilder(object):
             self.play_order += 1
             navLabelTag = Tag(ncx_soup, 'navLabel')
             textTag = Tag(ncx_soup, 'text')
             if len(authors_by_letter[1]) > 1:
                 if authors_by_letter[1] == self.SYMBOLS:
                     fmt_string = _(u"Authors beginning with %s")
                 else:
                     fmt_string = _(u"Authors beginning with '%s'")
@@ -4422,12 +4422,12 @@ class CatalogBuilder(object):
         Generate a legal XHTML anchor from unicode character.
 
         Args:
-         c (unicode): character
+         c (unicode): character(s)
 
         Return:
-         (str): legal XHTML anchor string of unicode charactar name
+         (str): legal XHTML anchor string of unicode character name
         """
-        fullname = unicodedata.name(unicode(c))
+        fullname = u''.join(unicodedata.name(unicode(cc)) for cc in c)
         terms = fullname.split()
         return "_".join(terms)
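The last change above generalizes anchor generation from a single character to a string of characters. In Python 3 terms (`unicode()` and the `u''` prefix dropped), the new expression behaves like this:

```python
import unicodedata

def generate_anchor(chars):
    # concatenate the Unicode character name of every character, then
    # join the words of the combined name with underscores to form a
    # legal XHTML anchor
    fullname = ''.join(unicodedata.name(cc) for cc in chars)
    return '_'.join(fullname.split())

generate_anchor('A')  # 'LATIN_CAPITAL_LETTER_A'
generate_anchor('7')  # 'DIGIT_SEVEN'
```

Note that `unicodedata.name` raises `ValueError` for characters with no defined name (e.g. most control characters), which callers of the real method would also have to tolerate.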
|
@ -441,7 +441,11 @@ class BrowseServer(object):
|
||||
cat_len = len(category)
|
||||
if not (len(ucat) > cat_len and ucat.startswith(category+'.')):
|
||||
continue
|
||||
icon = category_icon_map['user:']
|
||||
|
||||
if ucat in self.icon_map:
|
||||
icon = '_'+quote(self.icon_map[ucat])
|
||||
else:
|
||||
icon = category_icon_map['user:']
|
||||
# we have a subcategory. Find any further dots (further subcats)
|
||||
cat_len += 1
|
||||
cat = ucat[cat_len:]
|
||||
|
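This fix (ticket 1095016 in the changelog) makes the content server prefer a custom icon registered for the sub-category before falling back to the generic `user:` icon. As a pure function, with the URL `quote()` step omitted for brevity (the function itself is our sketch; the names follow the hunk):

```python
def subcategory_icon(ucat, icon_map, category_icon_map):
    # a custom icon registered for this sub-category wins; the '_' prefix
    # marks it as user-supplied, otherwise fall back to the generic icon
    # used for user categories
    if ucat in icon_map:
        return '_' + icon_map[ucat]
    return category_icon_map['user:']
```

Before the fix, the first branch was missing, so every sub-category silently got the generic icon even when a custom one was configured.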
File diff suppressed because it is too large
Some files were not shown because too many files have changed in this diff.