Merge from trunk

This commit is contained in:
Charles Haley 2013-01-07 09:18:06 +01:00
commit 313cd5543b
118 changed files with 25114 additions and 20432 deletions

View File

@@ -19,6 +19,57 @@
# new recipes:
#  - title:

- version: 0.9.13
  date: 2013-01-04

  new features:
    - title: "Complete rewrite of the PDF Output engine, to support links and fix various bugs"
      type: major
      description: "calibre now has a new PDF output engine that supports links in the text. It also fixes various bugs, detailed below. In order to implement support for links and fix these bugs, the engine had to be completely rewritten, so there may be some regressions."

    - title: "Show disabled device plugins in Preferences->Ignored Devices"

    - title: "Get Books: Fix Smashwords, Google Books and B&N stores. Add Nook UK store"

    - title: "Allow series numbers lower than -100 for custom series columns."
      tickets: [1094475]

    - title: "Add mass storage driver for Rockchip-based Android smart phones"
      tickets: [1087809]

    - title: "Add a clear ratings button to the edit metadata dialog"

  bug fixes:
    - title: "PDF Output: Fix custom page sizes not working on OS X"

    - title: "PDF Output: Fix embedding of many fonts not supported (note that embedding of OpenType fonts with PostScript outlines is still not supported on Windows, though it is supported on other operating systems)"

    - title: "PDF Output: Fix crashes converting some books to PDF on OS X"
      tickets: [1087688]

    - title: "HTML Input: Handle entities inside href attributes when following the links in an HTML file."
      tickets: [1094203]

    - title: "Content server: Fix custom icons not used for sub categories"
      tickets: [1095016]

    - title: "Force use of non-unicode constants in compiled templates. Fixes a problem with regular expression character classes and probably other things."

    - title: "Kobo driver: Do not error out if there are invalid dates in the device database"
      tickets: [1094597]

    - title: "Content server: Fix for non-unicode hostnames when using mDNS"
      tickets: [1094063]

  improved recipes:
    - Today's Zaman
    - The Economist
    - Foreign Affairs
    - New York Times
    - Alternet
    - Harper's Magazine
    - La Stampa

- version: 0.9.12
  date: 2012-12-28

View File

@@ -672,6 +672,19 @@ There are three possible things I know of, that can cause this:

  * The Logitech SetPoint Settings application causes random crashes in
    |app| when it is open. Close it before starting |app|.

If none of the above apply to you, then there is some other program on your
computer that is interfering with |app|. First reboot your computer in safe
mode, to have as few running programs as possible, and see if the crashes still
happen. If they do not, then you know it is some program causing the problem.
The most likely such culprit is a program that modifies other programs'
behavior, such as an antivirus, a device driver, something like RoboForm (an
automatic form filling app) or an assistive technology like Voice Control or a
Screen Reader.

The only way to find the culprit is to eliminate the programs one by one and
see which one is causing the issue. Basically, stop a program, run calibre,
check for crashes. If they still happen, stop another program and repeat.

|app| is not starting on OS X?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

View File

@@ -70,18 +70,6 @@ class Economist(BasicNewsRecipe):
        return br
        '''

-    def get_cover_url(self):
-        soup = self.index_to_soup('http://www.economist.com/printedition/covers')
-        div = soup.find('div', attrs={'class':lambda x: x and
-            'print-cover-links' in x})
-        a = div.find('a', href=True)
-        url = a.get('href')
-        if url.startswith('/'):
-            url = 'http://www.economist.com' + url
-        soup = self.index_to_soup(url)
-        div = soup.find('div', attrs={'class':'cover-content'})
-        img = div.find('img', src=True)
-        return img.get('src')

    def parse_index(self):
        return self.economist_parse_index()

@@ -92,7 +80,7 @@ class Economist(BasicNewsRecipe):
        if div is not None:
            img = div.find('img', src=True)
            if img is not None:
-                self.cover_url = img['src']
+                self.cover_url = re.sub('thumbnail','full',img['src'])

        feeds = OrderedDict()
        for section in soup.findAll(attrs={'class':lambda x: x and 'section' in
            x}):
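The cover-URL change in the hunk above swaps the `thumbnail` segment of the image path for `full`, so the recipe fetches the high-resolution cover instead of the small preview. A minimal sketch of that substitution, using the sample cover URL that appears in the legacy Economist RSS recipe elsewhere in this same commit:

```python
import re

# Sample cover image URL, taken from the old RSS-based Economist recipe in this commit
src = 'http://media.economist.com/sites/default/files/imagecache/print-cover-thumbnail/print-covers/currentcoverus_large.jpg'

# Rewrite the imagecache segment to request the full-size cover
full = re.sub('thumbnail', 'full', src)
print(full)
```

Note that `re.sub` replaces every occurrence of the pattern; here the URL contains exactly one.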

View File

@@ -9,7 +9,7 @@ from calibre.web.feeds.news import BasicNewsRecipe
from calibre.ebooks.BeautifulSoup import Tag, NavigableString
from collections import OrderedDict
-import time, re
+import re

class Economist(BasicNewsRecipe):

@@ -37,7 +37,6 @@ class Economist(BasicNewsRecipe):
        padding: 7px 0px 9px;
    }
    '''
    oldest_article = 7.0
    remove_tags = [
        dict(name=['script', 'noscript', 'title', 'iframe', 'cf_floatingcontent']),

@@ -46,7 +45,6 @@ class Economist(BasicNewsRecipe):
        {'class': lambda x: x and 'share-links-header' in x},
    ]
    keep_only_tags = [dict(id='ec-article-body')]
-    needs_subscription = False
    no_stylesheets = True
    preprocess_regexps = [(re.compile('</html>.*', re.DOTALL),
        lambda x:'</html>')]

@@ -55,27 +53,25 @@ class Economist(BasicNewsRecipe):
    # downloaded with connection reset by peer (104) errors.
    delay = 1

-    def get_cover_url(self):
-        soup = self.index_to_soup('http://www.economist.com/printedition/covers')
-        div = soup.find('div', attrs={'class':lambda x: x and
-            'print-cover-links' in x})
-        a = div.find('a', href=True)
-        url = a.get('href')
-        if url.startswith('/'):
-            url = 'http://www.economist.com' + url
-        soup = self.index_to_soup(url)
-        div = soup.find('div', attrs={'class':'cover-content'})
-        img = div.find('img', src=True)
-        return img.get('src')
+    needs_subscription = False
+    '''
+    def get_browser(self):
+        br = BasicNewsRecipe.get_browser()
+        if self.username and self.password:
+            br.open('http://www.economist.com/user/login')
+            br.select_form(nr=1)
+            br['name'] = self.username
+            br['pass'] = self.password
+            res = br.submit()
+            raw = res.read()
+            if '>Log out<' not in raw:
+                raise ValueError('Failed to login to economist.com. '
+                    'Check your username and password.')
+        return br
+    '''

    def parse_index(self):
-        try:
-            return self.economist_parse_index()
-        except:
-            raise
-            self.log.warn(
-                'Initial attempt to parse index failed, retrying in 30 seconds')
-            time.sleep(30)
        return self.economist_parse_index()

    def economist_parse_index(self):
@@ -84,7 +80,7 @@ class Economist(BasicNewsRecipe):
        if div is not None:
            img = div.find('img', src=True)
            if img is not None:
-                self.cover_url = img['src']
+                self.cover_url = re.sub('thumbnail','full',img['src'])

        feeds = OrderedDict()
        for section in soup.findAll(attrs={'class':lambda x: x and 'section' in
            x}):
@@ -151,154 +147,3 @@ class Economist(BasicNewsRecipe):
            div.insert(2, img)
        table.replaceWith(div)
        return soup
'''
from calibre.web.feeds.news import BasicNewsRecipe
from calibre.utils.threadpool import ThreadPool, makeRequests
from calibre.ebooks.BeautifulSoup import Tag, NavigableString
import time, string, re
from datetime import datetime
from lxml import html
class Economist(BasicNewsRecipe):
title = 'The Economist (RSS)'
language = 'en'
__author__ = "Kovid Goyal"
description = ('Global news and current affairs from a European'
' perspective. Best downloaded on Friday mornings (GMT).'
' Much slower than the print edition based version.')
extra_css = '.headline {font-size: x-large;} \n h2 { font-size: small; } \n h1 { font-size: medium; }'
oldest_article = 7.0
cover_url = 'http://media.economist.com/sites/default/files/imagecache/print-cover-thumbnail/print-covers/currentcoverus_large.jpg'
#cover_url = 'http://www.economist.com/images/covers/currentcoverus_large.jpg'
remove_tags = [
dict(name=['script', 'noscript', 'title', 'iframe', 'cf_floatingcontent']),
dict(attrs={'class':['dblClkTrk', 'ec-article-info',
'share_inline_header', 'related-items']}),
{'class': lambda x: x and 'share-links-header' in x},
]
keep_only_tags = [dict(id='ec-article-body')]
no_stylesheets = True
preprocess_regexps = [(re.compile('</html>.*', re.DOTALL),
lambda x:'</html>')]
def parse_index(self):
from calibre.web.feeds.feedparser import parse
if self.test:
self.oldest_article = 14.0
raw = self.index_to_soup(
'http://feeds.feedburner.com/economist/full_print_edition',
raw=True)
entries = parse(raw).entries
pool = ThreadPool(10)
self.feed_dict = {}
requests = []
for i, item in enumerate(entries):
title = item.get('title', _('Untitled article'))
published = item.date_parsed
if not published:
published = time.gmtime()
utctime = datetime(*published[:6])
delta = datetime.utcnow() - utctime
if delta.days*24*3600 + delta.seconds > 24*3600*self.oldest_article:
self.log.debug('Skipping article %s as it is too old.'%title)
continue
link = item.get('link', None)
description = item.get('description', '')
author = item.get('author', '')
requests.append([i, link, title, description, author, published])
if self.test:
requests = requests[:4]
requests = makeRequests(self.process_eco_feed_article, requests, self.eco_article_found,
self.eco_article_failed)
for r in requests: pool.putRequest(r)
pool.wait()
return self.eco_sort_sections([(t, a) for t, a in
self.feed_dict.items()])
def eco_sort_sections(self, feeds):
if not feeds:
raise ValueError('No new articles found')
order = {
'The World This Week': 1,
'Leaders': 2,
'Letters': 3,
'Briefing': 4,
'Business': 5,
'Finance And Economics': 6,
'Science & Technology': 7,
'Books & Arts': 8,
'International': 9,
'United States': 10,
'Asia': 11,
'Europe': 12,
'The Americas': 13,
'Middle East & Africa': 14,
'Britain': 15,
'Obituary': 16,
}
return sorted(feeds, cmp=lambda x,y:cmp(order.get(x[0], 100),
order.get(y[0], 100)))
def process_eco_feed_article(self, args):
from calibre import browser
i, url, title, description, author, published = args
br = browser()
ret = br.open(url)
raw = ret.read()
url = br.geturl().split('?')[0]+'/print'
root = html.fromstring(raw)
matches = root.xpath('//*[@class = "ec-article-info"]')
feedtitle = 'Miscellaneous'
if matches:
feedtitle = string.capwords(html.tostring(matches[-1], method='text',
encoding=unicode).split('|')[-1].strip())
return (i, feedtitle, url, title, description, author, published)
def eco_article_found(self, req, result):
from calibre.web.feeds import Article
i, feedtitle, link, title, description, author, published = result
self.log('Found print version for article:', title, 'in', feedtitle,
'at', link)
a = Article(i, title, link, author, description, published, '')
article = dict(title=a.title, description=a.text_summary,
date=time.strftime(self.timefmt, a.date), author=a.author, url=a.url)
if feedtitle not in self.feed_dict:
self.feed_dict[feedtitle] = []
self.feed_dict[feedtitle].append(article)
def eco_article_failed(self, req, tb):
self.log.error('Failed to download %s with error:'%req.args[0][2])
self.log.debug(tb)
def eco_find_image_tables(self, soup):
for x in soup.findAll('table', align=['right', 'center']):
if len(x.findAll('font')) in (1,2) and len(x.findAll('img')) == 1:
yield x
def postprocess_html(self, soup, first):
body = soup.find('body')
for name, val in body.attrs:
del body[name]
for table in list(self.eco_find_image_tables(soup)):
caption = table.find('font')
img = table.find('img')
div = Tag(soup, 'div')
div['style'] = 'text-align:left;font-size:70%'
ns = NavigableString(self.tag_to_string(caption))
div.insert(0, ns)
div.insert(1, Tag(soup, 'br'))
img.extract()
del img['width']
del img['height']
div.insert(2, img)
table.replaceWith(div)
return soup
'''
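The `parse_index` wrapper deleted in this commit re-raised its exception *before* the warning, sleep, and second attempt, so the retry was unreachable dead code, which is presumably why it was removed. For reference, a working sketch of the pattern that code was aiming for (`call_with_retry` is a hypothetical helper, not a calibre API):

```python
import time

def call_with_retry(fn, warn=None, delay=0):
    # Try once; on failure, optionally log a warning, wait, then try
    # one final time. (The deleted recipe code put `raise` first, so
    # its retry never executed.)
    try:
        return fn()
    except Exception:
        if warn is not None:
            warn('Initial attempt failed, retrying in %s seconds' % delay)
        time.sleep(delay)
        return fn()

# Usage: a fetch that fails once, then succeeds on the retry
attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) == 1:
        raise ValueError('transient failure')
    return 'index'

print(call_with_retry(flaky))  # 'index'
```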

recipes/el_diplo.recipe Normal file
View File

@ -0,0 +1,118 @@
# Copyright 2013 Tomás Di Domenico
#
# This is a news fetching recipe for the Calibre ebook software, for
# fetching the Cono Sur edition of Le Monde Diplomatique (www.eldiplo.org).
#
# This recipe is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This software is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this recipe. If not, see <http://www.gnu.org/licenses/>.
import re
from contextlib import closing
from calibre.web.feeds.recipes import BasicNewsRecipe
from calibre.ptempfile import PersistentTemporaryFile
from calibre.utils.magick import Image
class ElDiplo_Recipe(BasicNewsRecipe):
    title = u'El Diplo'
    __author__ = 'Tomas Di Domenico'
    description = 'Publicacion mensual de Le Monde Diplomatique, edicion Argentina'
    language = 'es_AR'
    needs_subscription = True
    auto_cleanup = True

    def get_cover(self, url):
        tmp_cover = PersistentTemporaryFile(suffix=".jpg", prefix="eldiplo_")
        self.cover_url = tmp_cover.name
        with closing(self.browser.open(url)) as r:
            imgdata = r.read()
            img = Image()
            img.load(imgdata)
            img.crop(img.size[0], img.size[1]/2, 0, 0)
            img.save(tmp_cover.name)

    def get_browser(self):
        br = BasicNewsRecipe.get_browser()
        if self.username is not None and self.password is not None:
            br.open('http://www.eldiplo.org/index.php/login/-/do_login/index.html')
            br.select_form(nr=3)
            br['uName'] = self.username
            br['uPassword'] = self.password
            br.submit()
        self.browser = br
        return br

    def parse_index(self):
        default_sect = 'General'
        articles = {default_sect: []}
        ans = [default_sect]
        sectionsmarker = 'DOSSIER_TITLE: '
        sectionsre = re.compile('^' + sectionsmarker)

        soup = self.index_to_soup('http://www.eldiplo.org/index.php')

        coverdivs = soup.findAll(True, attrs={'id': ['lmd-foto']})
        a = coverdivs[0].find('a', href=True)
        coverurl = a['href'].split("?imagen=")[1]
        self.get_cover(coverurl)

        thedivs = soup.findAll(True, attrs={'class': ['lmd-leermas']})
        for div in thedivs:
            a = div.find('a', href=True)
            if 'Sumario completo' in self.tag_to_string(a, use_alt=True):
                summaryurl = re.sub(r'\?.*', '', a['href'])
                summaryurl = 'http://www.eldiplo.org' + summaryurl

        for pagenum in xrange(1, 10):
            soup = self.index_to_soup('{0}/?cms1_paging_p_b32={1}'.format(summaryurl, pagenum))
            thedivs = soup.findAll(True, attrs={'class': ['interna']})
            if len(thedivs) == 0:
                break
            for div in thedivs:
                section = div.find(True, text=sectionsre).replace(sectionsmarker, '')
                if section == '':
                    section = default_sect
                if section not in articles.keys():
                    articles[section] = []
                    ans.append(section)
                nota = div.find(True, attrs={'class': ['lmd-pl-titulo-nota-dossier']})
                a = nota.find('a', href=True)
                if not a:
                    continue
                url = re.sub(r'\?.*', '', a['href'])
                url = 'http://www.eldiplo.org' + url
                title = self.tag_to_string(a, use_alt=True).strip()
                description = ''
                summary = div.find(True, attrs={'class': 'lmd-sumario-descript'}).find('p')
                if summary:
                    description = self.tag_to_string(summary, use_alt=False)
                auth = ''
                aut = div.find(True, attrs={'class': 'lmd-autor-sumario'})
                if aut:
                    auth = self.tag_to_string(aut, use_alt=False).strip()
                if not articles.has_key(section):
                    articles[section] = []
                articles[section].append(dict(title=title, author=auth, url=url, date=None, description=description, content=''))

        #ans = self.sort_index_by(ans, {'The Front Page':-1, 'Dining In, Dining Out':1, 'Obituaries':2})
        ans = [(section, articles[section]) for section in ans if articles.has_key(section)]
        return ans
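The `parse_index` method above pages through the issue summary by requesting pages 1 through 9 and stopping at the first page that returns no `interna` divs. The pagination idiom can be sketched generically like this (`fetch_all_pages` and the stand-in fetcher are hypothetical, for illustration only):

```python
def fetch_all_pages(get_page, max_pages=10):
    # Mirrors the recipe's pagination: request page 1, 2, ... and stop
    # as soon as a page comes back empty.
    items = []
    for pagenum in range(1, max_pages):
        page = get_page(pagenum)
        if not page:
            break
        items.extend(page)
    return items

# Usage with a stand-in fetcher: pages 1 and 2 have content, page 3 is empty
pages = {1: ['articulo-a', 'articulo-b'], 2: ['articulo-c']}
print(fetch_all_pages(lambda n: pages.get(n, [])))
```

Stopping on the first empty page (rather than always fetching all nine) keeps the recipe from hammering the site once the summary runs out.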

View File

@@ -18,7 +18,7 @@ class Fleshbot(BasicNewsRecipe):
    encoding = 'utf-8'
    use_embedded_content = True
    language = 'en'
-    masthead_url = 'http://cache.gawkerassets.com/assets/kotaku.com/img/logo.png'
+    masthead_url = 'http://fbassets.s3.amazonaws.com/images/uploads/2012/01/fleshbot-logo.png'
    extra_css = '''
        body{font-family: "Lucida Grande",Helvetica,Arial,sans-serif}
        img{margin-bottom: 1em}

@@ -31,7 +31,7 @@ class Fleshbot(BasicNewsRecipe):
        , 'language' : language
    }

-    feeds = [(u'Articles', u'http://feeds.gawker.com/fleshbot/vip?format=xml')]
+    feeds = [(u'Articles', u'http://www.fleshbot.com/feed')]

    remove_tags = [
        {'class': 'feedflare'},

View File

@@ -28,12 +28,15 @@ class IlMessaggero(BasicNewsRecipe):
    recursion = 10
    remove_javascript = True
+    extra_css = ' .bianco31lucida{color: black} '

-    keep_only_tags = [dict(name='h1', attrs={'class':'titoloLettura2'}),
-                      dict(name='h2', attrs={'class':'sottotitLettura'}),
-                      dict(name='span', attrs={'class':'testoArticoloG'})
+    keep_only_tags = [dict(name='h1', attrs={'class':['titoloLettura2','titoloart','bianco31lucida']}),
+                      dict(name='h2', attrs={'class':['sottotitLettura','grigio16']}),
+                      dict(name='span', attrs={'class':'testoArticoloG'}),
+                      dict(name='div', attrs={'id':'testodim'})
                      ]

    def get_cover_url(self):
        cover = None
        st = time.localtime()

@@ -55,17 +58,16 @@ class IlMessaggero(BasicNewsRecipe):
    feeds = [
        (u'HomePage', u'http://www.ilmessaggero.it/rss/home.xml'),
        (u'Primo Piano', u'http://www.ilmessaggero.it/rss/initalia_primopiano.xml'),
-        (u'Cronaca Bianca', u'http://www.ilmessaggero.it/rss/initalia_cronacabianca.xml'),
-        (u'Cronaca Nera', u'http://www.ilmessaggero.it/rss/initalia_cronacanera.xml'),
        (u'Economia e Finanza', u'http://www.ilmessaggero.it/rss/economia.xml'),
        (u'Politica', u'http://www.ilmessaggero.it/rss/initalia_politica.xml'),
-        (u'Scienza e Tecnologia', u'http://www.ilmessaggero.it/rss/scienza.xml'),
-        (u'Cinema', u'http://www.ilmessaggero.it/rss.php?refresh_ce#'),
-        (u'Viaggi', u'http://www.ilmessaggero.it/rss.php?refresh_ce#'),
-        (u'Edizioni Locali', u'http://www.ilmessaggero.it/rss/edlocali.xml'),
+        (u'Cultura', u'http://www.ilmessaggero.it/rss/cultura.xml'),
+        (u'Tecnologia', u'http://www.ilmessaggero.it/rss/tecnologia.xml'),
+        (u'Spettacoli', u'http://www.ilmessaggero.it/rss/spettacoli.xml'),
        (u'Roma', u'http://www.ilmessaggero.it/rss/roma.xml'),
-        (u'Cultura e Tendenze', u'http://www.ilmessaggero.it/rss/roma_culturaspet.xml'),
+        (u'Benessere', u'http://www.ilmessaggero.it/rss/benessere.xml'),
        (u'Sport', u'http://www.ilmessaggero.it/rss/sport.xml'),
-        (u'Calcio', u'http://www.ilmessaggero.it/rss/sport_calcio.xml'),
-        (u'Motori', u'http://www.ilmessaggero.it/rss/sport_motori.xml')
+        (u'Moda', u'http://www.ilmessaggero.it/rss/moda.xml')
    ]

View File

@@ -14,7 +14,8 @@ class LiberoNews(BasicNewsRecipe):
    __author__ = 'Marini Gabriele'
    description = 'Italian daily newspaper'

-    cover_url = 'http://www.libero-news.it/images/logo.png'
+    #cover_url = 'http://www.liberoquotidiano.it/images/Libero%20Quotidiano.jpg'
+    cover_url = 'http://www.edicola.liberoquotidiano.it/vnlibero/fpcut.jsp?testata=milano'
    title = u'Libero '
    publisher = 'EDITORIALE LIBERO s.r.l 2006'
    category = 'News, politics, culture, economy, general interest'

@@ -32,10 +33,11 @@ class LiberoNews(BasicNewsRecipe):
    remove_javascript = True

    keep_only_tags = [
-        dict(name='div', attrs={'class':'Articolo'})
+        dict(name='div', attrs={'class':'Articolo'}),
+        dict(name='article')
    ]
    remove_tags = [
-        dict(name='div', attrs={'class':['CommentaFoto','Priva2']}),
+        dict(name='div', attrs={'class':['CommentaFoto','Priva2','login_commenti','box_16']}),
        dict(name='div', attrs={'id':['commentigenerale']})
    ]
    feeds = [
feeds = [ feeds = [

View File

@@ -66,8 +66,9 @@ class NewYorkReviewOfBooks(BasicNewsRecipe):
        self.log('Issue date:', date)

        # Find TOC
-        toc = soup.find('ul', attrs={'class':'issue-article-list'})
+        tocs = soup.findAll('ul', attrs={'class':'issue-article-list'})
        articles = []
+        for toc in tocs:
            for li in toc.findAll('li'):
                h3 = li.find('h3')
                title = self.tag_to_string(h3)

View File

@ -0,0 +1,22 @@
from calibre.web.feeds.news import BasicNewsRecipe
class HindustanTimes(BasicNewsRecipe):
title = u'Oxford Mail'
language = 'en_GB'
__author__ = 'Krittika Goyal'
oldest_article = 1 #days
max_articles_per_feed = 25
#encoding = 'cp1252'
use_embedded_content = False
no_stylesheets = True
auto_cleanup = True
feeds = [
('News',
'http://www.oxfordmail.co.uk/news/rss/'),
('Sports',
'http://www.oxfordmail.co.uk/sport/rss/'),
]

View File

@@ -26,24 +26,28 @@ class TodaysZaman_en(BasicNewsRecipe):
    # remove_attributes = ['width','height']

    feeds = [
-        ( u'Home', u'http://www.todayszaman.com/rss?sectionId=0'),
-        ( u'News', u'http://www.todayszaman.com/rss?sectionId=100'),
-        ( u'Business', u'http://www.todayszaman.com/rss?sectionId=105'),
-        ( u'Interviews', u'http://www.todayszaman.com/rss?sectionId=8'),
-        ( u'Columnists', u'http://www.todayszaman.com/rss?sectionId=6'),
-        ( u'Op-Ed', u'http://www.todayszaman.com/rss?sectionId=109'),
-        ( u'Arts & Culture', u'http://www.todayszaman.com/rss?sectionId=110'),
-        ( u'Expat Zone', u'http://www.todayszaman.com/rss?sectionId=132'),
-        ( u'Sports', u'http://www.todayszaman.com/rss?sectionId=5'),
-        ( u'Features', u'http://www.todayszaman.com/rss?sectionId=116'),
-        ( u'Travel', u'http://www.todayszaman.com/rss?sectionId=117'),
-        ( u'Leisure', u'http://www.todayszaman.com/rss?sectionId=118'),
-        ( u'Weird But True', u'http://www.todayszaman.com/rss?sectionId=134'),
-        ( u'Life', u'http://www.todayszaman.com/rss?sectionId=133'),
-        ( u'Health', u'http://www.todayszaman.com/rss?sectionId=126'),
-        ( u'Press Review', u'http://www.todayszaman.com/rss?sectionId=130'),
-        ( u'Todays think tanks', u'http://www.todayszaman.com/rss?sectionId=159'),
+        ( u'Home', u'http://www.todayszaman.com/0.rss'),
+        ( u'Sports', u'http://www.todayszaman.com/5.rss'),
+        ( u'Columnists', u'http://www.todayszaman.com/6.rss'),
+        ( u'Interviews', u'http://www.todayszaman.com/9.rss'),
+        ( u'News', u'http://www.todayszaman.com/100.rss'),
+        ( u'National', u'http://www.todayszaman.com/101.rss'),
+        ( u'Diplomacy', u'http://www.todayszaman.com/102.rss'),
+        ( u'World', u'http://www.todayszaman.com/104.rss'),
+        ( u'Business', u'http://www.todayszaman.com/105.rss'),
+        ( u'Op-Ed', u'http://www.todayszaman.com/109.rss'),
+        ( u'Arts & Culture', u'http://www.todayszaman.com/110.rss'),
+        ( u'Features', u'http://www.todayszaman.com/116.rss'),
+        ( u'Travel', u'http://www.todayszaman.com/117.rss'),
+        ( u'Food', u'http://www.todayszaman.com/124.rss'),
+        ( u'Press Review', u'http://www.todayszaman.com/130.rss'),
+        ( u'Expat Zone', u'http://www.todayszaman.com/132.rss'),
+        ( u'Life', u'http://www.todayszaman.com/133.rss'),
+        ( u'Think Tanks', u'http://www.todayszaman.com/159.rss'),
+        ( u'Almanac', u'http://www.todayszaman.com/161.rss'),
+        ( u'Health', u'http://www.todayszaman.com/162.rss'),
+        ( u'Fashion & Beauty', u'http://www.todayszaman.com/163.rss'),
+        ( u'Science & Technology', u'http://www.todayszaman.com/349.rss'),
    ]

    #def preprocess_html(self, soup):

@@ -51,3 +55,4 @@ class TodaysZaman_en(BasicNewsRecipe):
    #def print_version(self, url): #there is a problem caused by table format
    #return url.replace('http://www.todayszaman.com/newsDetail_getNewsById.action?load=detay&', 'http://www.todayszaman.com/newsDetail_openPrintPage.action?')
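The feed update above follows a consistent URL scheme change on the site: the old query-style `rss?sectionId=N` endpoints became path-style `N.rss` endpoints, with the numeric section id preserved. A small sketch of that mapping (`modernize_feed_url` is a hypothetical helper written for illustration, not part of the recipe):

```python
import re

def modernize_feed_url(url):
    # Map http://www.todayszaman.com/rss?sectionId=N
    #  to http://www.todayszaman.com/N.rss, leaving other URLs untouched.
    m = re.match(r'(http://www\.todayszaman\.com)/rss\?sectionId=(\d+)$', url)
    if m:
        return '{0}/{1}.rss'.format(m.group(1), m.group(2))
    return url

print(modernize_feed_url('http://www.todayszaman.com/rss?sectionId=100'))
```

Note the mapping alone does not explain every edit in the hunk: several sections were also renamed, dropped, or added, so the ids had to be re-checked by hand.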

View File

@@ -12,13 +12,13 @@ msgstr ""
"Report-Msgid-Bugs-To: Debian iso-codes team <pkg-isocodes-"
"devel@lists.alioth.debian.org>\n"
"POT-Creation-Date: 2011-11-25 14:01+0000\n"
-"PO-Revision-Date: 2012-12-22 17:18+0000\n"
+"PO-Revision-Date: 2012-12-31 12:50+0000\n"
"Last-Translator: Ferran Rius <frius64@hotmail.com>\n"
"Language-Team: Catalan <linux@softcatala.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
-"X-Launchpad-Export-Date: 2012-12-23 04:38+0000\n"
+"X-Launchpad-Export-Date: 2013-01-01 04:45+0000\n"
"X-Generator: Launchpad (build 16378)\n"
"Language: ca\n"

@@ -1744,7 +1744,7 @@ msgstr "Asu (Nigèria)"
#. name for aun
msgid "One; Molmo"
msgstr "One; Molmo"

#. name for auo
msgid "Auyokawa"

@@ -1964,7 +1964,7 @@ msgstr "Leyigha"
#. name for ayk
msgid "Akuku"
-msgstr "Akuku"
+msgstr "Okpe-Idesa-Akuku; Akuku"

#. name for ayl
msgid "Arabic; Libyan"

@@ -9984,7 +9984,7 @@ msgstr "Indri"
#. name for ids
msgid "Idesa"
-msgstr "Idesa"
+msgstr "Okpe-Idesa-Akuku; Idesa"

#. name for idt
msgid "Idaté"

@@ -19524,7 +19524,7 @@ msgstr ""
#. name for obi
msgid "Obispeño"
-msgstr ""
+msgstr "Obispeño"

#. name for obk
msgid "Bontok; Southern"

@@ -19532,7 +19532,7 @@ msgstr "Bontoc; meridional"
#. name for obl
msgid "Oblo"
-msgstr ""
+msgstr "Oblo"

#. name for obm
msgid "Moabite"

@@ -19552,11 +19552,11 @@ msgstr "Bretó; antic"
#. name for obu
msgid "Obulom"
-msgstr ""
+msgstr "Obulom"

#. name for oca
msgid "Ocaina"
-msgstr ""
+msgstr "Ocaina"

#. name for och
msgid "Chinese; Old"

@@ -19576,11 +19576,11 @@ msgstr "Matlazinca; Atzingo"
#. name for oda
msgid "Odut"
-msgstr ""
+msgstr "Odut"

#. name for odk
msgid "Od"
-msgstr ""
+msgstr "Od"

#. name for odt
msgid "Dutch; Old"

@@ -19588,11 +19588,11 @@ msgstr "Holandès; antic"
#. name for odu
msgid "Odual"
-msgstr ""
+msgstr "Odual"

#. name for ofo
msgid "Ofo"
-msgstr ""
+msgstr "Ofo"

#. name for ofs
msgid "Frisian; Old"

@@ -19604,11 +19604,11 @@ msgstr ""
#. name for ogb
msgid "Ogbia"
-msgstr ""
+msgstr "Ogbia"

#. name for ogc
msgid "Ogbah"
-msgstr ""
+msgstr "Ogbah"

#. name for oge
msgid "Georgian; Old"

@@ -19616,7 +19616,7 @@ msgstr ""
#. name for ogg
msgid "Ogbogolo"
-msgstr ""
+msgstr "Ogbogolo"

#. name for ogo
msgid "Khana"

@@ -19624,7 +19624,7 @@ msgstr ""
#. name for ogu
msgid "Ogbronuagum"
-msgstr ""
+msgstr "Ogbronuagum"

#. name for oht
msgid "Hittite; Old"

@@ -19636,27 +19636,27 @@ msgstr "Hongarès; antic"
#. name for oia
msgid "Oirata"
-msgstr ""
+msgstr "Oirata"

#. name for oin
msgid "One; Inebu"
-msgstr ""
+msgstr "Oneià; Inebu"

#. name for ojb
msgid "Ojibwa; Northwestern"
-msgstr ""
+msgstr "Ojibwa; Nordoccidental"

#. name for ojc
msgid "Ojibwa; Central"
-msgstr ""
+msgstr "Ojibwa; Central"

#. name for ojg
msgid "Ojibwa; Eastern"
-msgstr ""
+msgstr "Ojibwa; Oriental"

#. name for oji
msgid "Ojibwa"
-msgstr ""
+msgstr "Ojibwa; Occidental"

#. name for ojp
msgid "Japanese; Old"

@@ -19664,11 +19664,11 @@ msgstr "Japonès; antic"
#. name for ojs
msgid "Ojibwa; Severn"
-msgstr ""
+msgstr "Ojibwa; Severn"

#. name for ojv
msgid "Ontong Java"
-msgstr ""
+msgstr "Ontong Java"

#. name for ojw
msgid "Ojibwa; Western"

@@ -19676,19 +19676,19 @@ msgstr ""
#. name for oka
msgid "Okanagan"
-msgstr ""
+msgstr "Colville-Okanagà"

#. name for okb
msgid "Okobo"
-msgstr ""
+msgstr "Okobo"

#. name for okd
msgid "Okodia"
-msgstr ""
+msgstr "Okodia"

#. name for oke
msgid "Okpe (Southwestern Edo)"
-msgstr ""
+msgstr "Okpe"

#. name for okh
msgid "Koresh-e Rostam"

@@ -19696,15 +19696,15 @@ msgstr ""
#. name for oki
msgid "Okiek"
-msgstr ""
+msgstr "Okiek"

#. name for okj
msgid "Oko-Juwoi"
-msgstr ""
+msgstr "Oko-Juwoi"

#. name for okk
msgid "One; Kwamtim"
-msgstr ""
+msgstr "Oneià; Kwamtim"

#. name for okl
msgid "Kentish Sign Language; Old"

@@ -19716,7 +19716,7 @@ msgstr ""
#. name for okn
msgid "Oki-No-Erabu"
-msgstr ""
+msgstr "Oki-No-Erabu"

#. name for oko
msgid "Korean; Old (3rd-9th cent.)"

@@ -19728,19 +19728,19 @@ msgstr ""
#. name for oks
msgid "Oko-Eni-Osayen"
-msgstr ""
+msgstr "Oko-Eni-Osayen"

#. name for oku
msgid "Oku"
-msgstr ""
+msgstr "Oku"

#. name for okv
msgid "Orokaiva"
-msgstr ""
+msgstr "Orokaiwa"

#. name for okx
msgid "Okpe (Northwestern Edo)"
-msgstr ""
+msgstr "Okpe-Idesa-Akuku; Okpe"

#. name for ola
msgid "Walungge"

@@ -19752,11 +19752,11 @@ msgstr ""
#. name for ole
msgid "Olekha"
-msgstr ""
+msgstr "Olekha"

#. name for olm
msgid "Oloma"
-msgstr ""
+msgstr "Oloma"

#. name for olo
msgid "Livvi"

@@ -19768,7 +19768,7 @@ msgstr ""
#. name for oma
msgid "Omaha-Ponca"
-msgstr ""
+msgstr "Omaha-Ponca"

#. name for omb
msgid "Ambae; East"

@@ -19780,23 +19780,23 @@ msgstr ""
#. name for ome
msgid "Omejes"
-msgstr ""
+msgstr "Omejes"

#. name for omg
msgid "Omagua"
-msgstr ""
+msgstr "Omagua"

#. name for omi
msgid "Omi"
-msgstr ""
+msgstr "Omi"

#. name for omk
msgid "Omok"
-msgstr ""
+msgstr "Omok"

#. name for oml
msgid "Ombo"
-msgstr ""
+msgstr "Ombo"

#. name for omn
msgid "Minoan"

@@ -19816,11 +19816,11 @@ msgstr ""
#. name for omt
msgid "Omotik"
-msgstr ""
+msgstr "Omotik"

#. name for omu
msgid "Omurano"
-msgstr ""
+msgstr "Omurano"

#. name for omw
msgid "Tairora; South"

@@ -19832,7 +19832,7 @@ msgstr ""
#. name for ona
msgid "Ona"
-msgstr ""
+msgstr "Ona"

#. name for onb
msgid "Lingao"

@@ -19840,31 +19840,31 @@ msgstr ""
#. name for one
msgid "Oneida"
-msgstr ""
+msgstr "Oneida"

#. name for ong
msgid "Olo"
-msgstr ""
+msgstr "Olo"

#. name for oni
msgid "Onin"
-msgstr ""
+msgstr "Onin"

#. name for onj
msgid "Onjob"
-msgstr ""
+msgstr "Onjob"

#. name for onk
msgid "One; Kabore"
-msgstr ""
+msgstr "Oneià; Kabore"

#. name for onn
msgid "Onobasulu"
-msgstr ""
+msgstr "Onobasulu"

#. name for ono
msgid "Onondaga"
-msgstr ""
+msgstr "Onondaga"

#. name for onp
msgid "Sartang"

@@ -19872,15 +19872,15 @@ msgstr ""
#. name for onr
msgid "One; Northern"
-msgstr ""
+msgstr "Oneià; Septentrional"

#. name for ons
msgid "Ono"
-msgstr ""
+msgstr "Ono"

#. name for ont
msgid "Ontenu"
-msgstr ""
+msgstr "Ontenu"

#. name for onu
msgid "Unua" msgid "Unua"
@ -19900,23 +19900,23 @@ msgstr ""
#. name for oog #. name for oog
msgid "Ong" msgid "Ong"
msgstr "" msgstr "Ong"
#. name for oon #. name for oon
msgid "Önge" msgid "Önge"
msgstr "" msgstr "Onge"
#. name for oor #. name for oor
msgid "Oorlams" msgid "Oorlams"
msgstr "" msgstr "Oorlams"
#. name for oos #. name for oos
msgid "Ossetic; Old" msgid "Ossetic; Old"
msgstr "" msgstr "Osset; antic"
#. name for opa #. name for opa
msgid "Okpamheri" msgid "Okpamheri"
msgstr "" msgstr "Okpamheri"
#. name for opk #. name for opk
msgid "Kopkaka" msgid "Kopkaka"
@ -19924,39 +19924,39 @@ msgstr ""
#. name for opm #. name for opm
msgid "Oksapmin" msgid "Oksapmin"
msgstr "" msgstr "Oksapmin"
#. name for opo #. name for opo
msgid "Opao" msgid "Opao"
msgstr "" msgstr "Opao"
#. name for opt #. name for opt
msgid "Opata" msgid "Opata"
msgstr "" msgstr "Opata"
#. name for opy #. name for opy
msgid "Ofayé" msgid "Ofayé"
msgstr "" msgstr "Opaie"
#. name for ora #. name for ora
msgid "Oroha" msgid "Oroha"
msgstr "" msgstr "Oroha"
#. name for orc #. name for orc
msgid "Orma" msgid "Orma"
msgstr "" msgstr "Orma"
#. name for ore #. name for ore
msgid "Orejón" msgid "Orejón"
msgstr "" msgstr "Orejon"
#. name for org #. name for org
msgid "Oring" msgid "Oring"
msgstr "" msgstr "Oring"
#. name for orh #. name for orh
msgid "Oroqen" msgid "Oroqen"
msgstr "" msgstr "Orotxen"
#. name for ori #. name for ori
msgid "Oriya" msgid "Oriya"
@ -19968,19 +19968,19 @@ msgstr "Oromo"
#. name for orn #. name for orn
msgid "Orang Kanaq" msgid "Orang Kanaq"
msgstr "" msgstr "Orang; Kanaq"
#. name for oro #. name for oro
msgid "Orokolo" msgid "Orokolo"
msgstr "" msgstr "Orocolo"
#. name for orr #. name for orr
msgid "Oruma" msgid "Oruma"
msgstr "" msgstr "Oruma"
#. name for ors #. name for ors
msgid "Orang Seletar" msgid "Orang Seletar"
msgstr "" msgstr "Orang; Seletar"
#. name for ort #. name for ort
msgid "Oriya; Adivasi" msgid "Oriya; Adivasi"
@ -19988,7 +19988,7 @@ msgstr "Oriya; Adivasi"
#. name for oru #. name for oru
msgid "Ormuri" msgid "Ormuri"
msgstr "" msgstr "Ormuri"
#. name for orv #. name for orv
msgid "Russian; Old" msgid "Russian; Old"
@ -19996,31 +19996,31 @@ msgstr "Rus; antic"
#. name for orw #. name for orw
msgid "Oro Win" msgid "Oro Win"
msgstr "" msgstr "Oro Win"
#. name for orx #. name for orx
msgid "Oro" msgid "Oro"
msgstr "" msgstr "Oro"
#. name for orz #. name for orz
msgid "Ormu" msgid "Ormu"
msgstr "" msgstr "Ormu"
#. name for osa #. name for osa
msgid "Osage" msgid "Osage"
msgstr "" msgstr "Osage"
#. name for osc #. name for osc
msgid "Oscan" msgid "Oscan"
msgstr "" msgstr "Osc"
#. name for osi #. name for osi
msgid "Osing" msgid "Osing"
msgstr "" msgstr "Osing"
#. name for oso #. name for oso
msgid "Ososo" msgid "Ososo"
msgstr "" msgstr "Ososo"
#. name for osp #. name for osp
msgid "Spanish; Old" msgid "Spanish; Old"
@ -20028,15 +20028,15 @@ msgstr "Espanyol; antic"
#. name for oss #. name for oss
msgid "Ossetian" msgid "Ossetian"
msgstr "" msgstr "Osset"
#. name for ost #. name for ost
msgid "Osatu" msgid "Osatu"
msgstr "" msgstr "Osatu"
#. name for osu #. name for osu
msgid "One; Southern" msgid "One; Southern"
msgstr "" msgstr "Oneià; Meridional"
#. name for osx #. name for osx
msgid "Saxon; Old" msgid "Saxon; Old"
@ -20052,15 +20052,15 @@ msgstr ""
#. name for otd #. name for otd
msgid "Ot Danum" msgid "Ot Danum"
msgstr "" msgstr "Dohoi"
#. name for ote #. name for ote
msgid "Otomi; Mezquital" msgid "Otomi; Mezquital"
msgstr "" msgstr "Otomí; Mezquital"
#. name for oti #. name for oti
msgid "Oti" msgid "Oti"
msgstr "" msgstr "Oti"
#. name for otk #. name for otk
msgid "Turkish; Old" msgid "Turkish; Old"
@ -20068,43 +20068,43 @@ msgstr "Turc; antic"
#. name for otl #. name for otl
msgid "Otomi; Tilapa" msgid "Otomi; Tilapa"
msgstr "" msgstr "Otomí; Tilapa"
#. name for otm #. name for otm
msgid "Otomi; Eastern Highland" msgid "Otomi; Eastern Highland"
msgstr "" msgstr "Otomí; Oriental"
#. name for otn #. name for otn
msgid "Otomi; Tenango" msgid "Otomi; Tenango"
msgstr "" msgstr "Otomí; Tenango"
#. name for otq #. name for otq
msgid "Otomi; Querétaro" msgid "Otomi; Querétaro"
msgstr "" msgstr "Otomí; Queretaro"
#. name for otr #. name for otr
msgid "Otoro" msgid "Otoro"
msgstr "" msgstr "Otoro"
#. name for ots #. name for ots
msgid "Otomi; Estado de México" msgid "Otomi; Estado de México"
msgstr "" msgstr "Otomí; Estat de Mèxic"
#. name for ott #. name for ott
msgid "Otomi; Temoaya" msgid "Otomi; Temoaya"
msgstr "" msgstr "Otomí; Temoaya"
#. name for otu #. name for otu
msgid "Otuke" msgid "Otuke"
msgstr "" msgstr "Otuke"
#. name for otw #. name for otw
msgid "Ottawa" msgid "Ottawa"
msgstr "" msgstr "Ottawa"
#. name for otx #. name for otx
msgid "Otomi; Texcatepec" msgid "Otomi; Texcatepec"
msgstr "" msgstr "Otomí; Texcatepec"
#. name for oty #. name for oty
msgid "Tamil; Old" msgid "Tamil; Old"
@ -20112,7 +20112,7 @@ msgstr ""
#. name for otz #. name for otz
msgid "Otomi; Ixtenco" msgid "Otomi; Ixtenco"
msgstr "" msgstr "Otomí; Ixtenc"
#. name for oua #. name for oua
msgid "Tagargrent" msgid "Tagargrent"
@ -20124,7 +20124,7 @@ msgstr ""
#. name for oue #. name for oue
msgid "Oune" msgid "Oune"
msgstr "" msgstr "Oune"
#. name for oui #. name for oui
msgid "Uighur; Old" msgid "Uighur; Old"
@ -20132,15 +20132,15 @@ msgstr ""
#. name for oum #. name for oum
msgid "Ouma" msgid "Ouma"
msgstr "" msgstr "Ouma"
#. name for oun #. name for oun
msgid "!O!ung" msgid "!O!ung"
msgstr "" msgstr "Oung"
#. name for owi #. name for owi
msgid "Owiniga" msgid "Owiniga"
msgstr "" msgstr "Owiniga"
#. name for owl #. name for owl
msgid "Welsh; Old" msgid "Welsh; Old"
@ -20148,11 +20148,11 @@ msgstr "Gal·lès; antic"
#. name for oyb #. name for oyb
msgid "Oy" msgid "Oy"
msgstr "" msgstr "Oy"
#. name for oyd #. name for oyd
msgid "Oyda" msgid "Oyda"
msgstr "" msgstr "Oyda"
#. name for oym #. name for oym
msgid "Wayampi" msgid "Wayampi"
@ -20160,7 +20160,7 @@ msgstr ""
#. name for oyy #. name for oyy
msgid "Oya'oya" msgid "Oya'oya"
msgstr "" msgstr "Oya'oya"
#. name for ozm #. name for ozm
msgid "Koonzime" msgid "Koonzime"
@ -20168,27 +20168,27 @@ msgstr ""
#. name for pab #. name for pab
msgid "Parecís" msgid "Parecís"
msgstr "" msgstr "Pareci"
#. name for pac #. name for pac
msgid "Pacoh" msgid "Pacoh"
msgstr "" msgstr "Pacoh"
#. name for pad #. name for pad
msgid "Paumarí" msgid "Paumarí"
msgstr "" msgstr "Paumarí"
#. name for pae #. name for pae
msgid "Pagibete" msgid "Pagibete"
msgstr "" msgstr "Pagibete"
#. name for paf #. name for paf
msgid "Paranawát" msgid "Paranawát"
msgstr "" msgstr "Paranawat"
#. name for pag #. name for pag
msgid "Pangasinan" msgid "Pangasinan"
msgstr "" msgstr "Pangasi"
#. name for pah #. name for pah
msgid "Tenharim" msgid "Tenharim"
@ -20196,19 +20196,19 @@ msgstr ""
#. name for pai #. name for pai
msgid "Pe" msgid "Pe"
msgstr "" msgstr "Pe"
#. name for pak #. name for pak
msgid "Parakanã" msgid "Parakanã"
msgstr "" msgstr "Akwawa; Parakanà"
#. name for pal #. name for pal
msgid "Pahlavi" msgid "Pahlavi"
msgstr "" msgstr "Pahlavi"
#. name for pam #. name for pam
msgid "Pampanga" msgid "Pampanga"
msgstr "" msgstr "Pampangà"
#. name for pan #. name for pan
msgid "Panjabi" msgid "Panjabi"
@ -20220,63 +20220,63 @@ msgstr ""
#. name for pap #. name for pap
msgid "Papiamento" msgid "Papiamento"
msgstr "" msgstr "Papiament"
#. name for paq #. name for paq
msgid "Parya" msgid "Parya"
msgstr "" msgstr "Parya"
#. name for par #. name for par
msgid "Panamint" msgid "Panamint"
msgstr "" msgstr "Panamint"
#. name for pas #. name for pas
msgid "Papasena" msgid "Papasena"
msgstr "" msgstr "Papasena"
#. name for pat #. name for pat
msgid "Papitalai" msgid "Papitalai"
msgstr "" msgstr "Papitalai"
#. name for pau #. name for pau
msgid "Palauan" msgid "Palauan"
msgstr "" msgstr "Palavà"
#. name for pav #. name for pav
msgid "Pakaásnovos" msgid "Pakaásnovos"
msgstr "" msgstr "Pakaa Nova"
#. name for paw #. name for paw
msgid "Pawnee" msgid "Pawnee"
msgstr "" msgstr "Pawnee"
#. name for pax #. name for pax
msgid "Pankararé" msgid "Pankararé"
msgstr "" msgstr "Pankararé"
#. name for pay #. name for pay
msgid "Pech" msgid "Pech"
msgstr "" msgstr "Pech"
#. name for paz #. name for paz
msgid "Pankararú" msgid "Pankararú"
msgstr "" msgstr "Pankararú"
#. name for pbb #. name for pbb
msgid "Páez" msgid "Páez"
msgstr "" msgstr "Páez"
#. name for pbc #. name for pbc
msgid "Patamona" msgid "Patamona"
msgstr "" msgstr "Patamona"
#. name for pbe #. name for pbe
msgid "Popoloca; Mezontla" msgid "Popoloca; Mezontla"
msgstr "" msgstr "Popoloca; Mezontla"
#. name for pbf #. name for pbf
msgid "Popoloca; Coyotepec" msgid "Popoloca; Coyotepec"
msgstr "" msgstr "Popoloca; Coyotepec"
#. name for pbg #. name for pbg
msgid "Paraujano" msgid "Paraujano"
@ -20288,7 +20288,7 @@ msgstr ""
#. name for pbi #. name for pbi
msgid "Parkwa" msgid "Parkwa"
msgstr "" msgstr "Parkwa"
#. name for pbl #. name for pbl
msgid "Mak (Nigeria)" msgid "Mak (Nigeria)"
@ -20300,7 +20300,7 @@ msgstr ""
#. name for pbo #. name for pbo
msgid "Papel" msgid "Papel"
msgstr "" msgstr "Papel"
#. name for pbp #. name for pbp
msgid "Badyara" msgid "Badyara"
@ -20336,7 +20336,7 @@ msgstr ""
#. name for pca #. name for pca
msgid "Popoloca; Santa Inés Ahuatempan" msgid "Popoloca; Santa Inés Ahuatempan"
msgstr "" msgstr "Popoloca; Ahuatempan"
#. name for pcb #. name for pcb
msgid "Pear" msgid "Pear"
@ -20832,7 +20832,7 @@ msgstr "Senufo; Palaka"
#. name for pls #. name for pls
msgid "Popoloca; San Marcos Tlalcoyalco" msgid "Popoloca; San Marcos Tlalcoyalco"
msgstr "" msgstr "Popoloca; Tlalcoyalc"
#. name for plt #. name for plt
msgid "Malagasy; Plateau" msgid "Malagasy; Plateau"
@ -21040,7 +21040,7 @@ msgstr ""
#. name for poe #. name for poe
msgid "Popoloca; San Juan Atzingo" msgid "Popoloca; San Juan Atzingo"
msgstr "" msgstr "Popoloca; Atzingo"
#. name for pof #. name for pof
msgid "Poke" msgid "Poke"
@ -21104,7 +21104,7 @@ msgstr ""
#. name for pow #. name for pow
msgid "Popoloca; San Felipe Otlaltepec" msgid "Popoloca; San Felipe Otlaltepec"
msgstr "" msgstr "Popoloca; Otlaltepec"
#. name for pox #. name for pox
msgid "Polabian" msgid "Polabian"
@ -21160,7 +21160,7 @@ msgstr ""
#. name for pps #. name for pps
msgid "Popoloca; San Luís Temalacayuca" msgid "Popoloca; San Luís Temalacayuca"
msgstr "" msgstr "Popoloca; Temalacayuca"
#. name for ppt #. name for ppt
msgid "Pare" msgid "Pare"


@ -9,13 +9,13 @@ msgstr ""
"Project-Id-Version: calibre\n" "Project-Id-Version: calibre\n"
"Report-Msgid-Bugs-To: FULL NAME <EMAIL@ADDRESS>\n" "Report-Msgid-Bugs-To: FULL NAME <EMAIL@ADDRESS>\n"
"POT-Creation-Date: 2011-11-25 14:01+0000\n" "POT-Creation-Date: 2011-11-25 14:01+0000\n"
"PO-Revision-Date: 2012-12-24 08:05+0000\n" "PO-Revision-Date: 2012-12-28 09:13+0000\n"
"Last-Translator: Adolfo Jayme Barrientos <fitoschido@gmail.com>\n" "Last-Translator: Jellby <Unknown>\n"
"Language-Team: Español; Castellano <>\n" "Language-Team: Español; Castellano <>\n"
"MIME-Version: 1.0\n" "MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n" "Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n" "Content-Transfer-Encoding: 8bit\n"
"X-Launchpad-Export-Date: 2012-12-25 04:46+0000\n" "X-Launchpad-Export-Date: 2012-12-29 05:00+0000\n"
"X-Generator: Launchpad (build 16378)\n" "X-Generator: Launchpad (build 16378)\n"
#. name for aaa #. name for aaa
@ -9584,7 +9584,7 @@ msgstr "Holikachuk"
#. name for hoj #. name for hoj
msgid "Hadothi" msgid "Hadothi"
msgstr "Hadothi" msgstr "Hadoti"
#. name for hol #. name for hol
msgid "Holu" msgid "Holu"
@ -11796,7 +11796,7 @@ msgstr ""
#. name for khq #. name for khq
msgid "Songhay; Koyra Chiini" msgid "Songhay; Koyra Chiini"
msgstr "" msgstr "Songhay koyra chiini"
#. name for khr #. name for khr
msgid "Kharia" msgid "Kharia"


@ -227,9 +227,22 @@ class GetTranslations(Translations): # {{{
ans.append(line.split()[-1]) ans.append(line.split()[-1])
return ans return ans
def resolve_conflicts(self):
conflict = False
for line in subprocess.check_output(['bzr', 'status']).splitlines():
if line == 'conflicts:':
conflict = True
break
if not conflict:
raise Exception('bzr merge failed and no conflicts found')
subprocess.check_call(['bzr', 'resolve', '--take-other'])
def run(self, opts): def run(self, opts):
if not self.modified_translations: if not self.modified_translations:
try:
subprocess.check_call(['bzr', 'merge', self.BRANCH]) subprocess.check_call(['bzr', 'merge', self.BRANCH])
except subprocess.CalledProcessError:
self.resolve_conflicts()
self.check_for_errors() self.check_for_errors()
if self.modified_translations: if self.modified_translations:
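The conflict handling added to `GetTranslations` above reduces to a linear scan of the `bzr status` output for a literal `conflicts:` line before running `bzr resolve --take-other`. A standalone sketch of that check (the helper name is mine, not calibre's):

```python
def has_conflicts(status_output):
    """Return True if `bzr status` output reports a conflicts section.

    Mirrors the line scan in GetTranslations.resolve_conflicts(), which
    only proceeds to `bzr resolve --take-other` once this marker is seen.
    """
    for line in status_output.splitlines():
        if line == 'conflicts:':
            return True
    return False
```

In the real method, a failed merge with no reported conflicts raises an exception instead, since that indicates something other than a translation conflict went wrong.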


@ -4,7 +4,7 @@ __license__ = 'GPL v3'
__copyright__ = '2008, Kovid Goyal kovid@kovidgoyal.net' __copyright__ = '2008, Kovid Goyal kovid@kovidgoyal.net'
__docformat__ = 'restructuredtext en' __docformat__ = 'restructuredtext en'
__appname__ = u'calibre' __appname__ = u'calibre'
numeric_version = (0, 9, 12) numeric_version = (0, 9, 13)
__version__ = u'.'.join(map(unicode, numeric_version)) __version__ = u'.'.join(map(unicode, numeric_version))
__author__ = u"Kovid Goyal <kovid@kovidgoyal.net>" __author__ = u"Kovid Goyal <kovid@kovidgoyal.net>"


@ -191,7 +191,7 @@ class ANDROID(USBMS):
0x10a9 : { 0x6050 : [0x227] }, 0x10a9 : { 0x6050 : [0x227] },
# Prestigio # Prestigio
0x2207 : { 0 : [0x222] }, 0x2207 : { 0 : [0x222], 0x10 : [0x222] },
} }
EBOOK_DIR_MAIN = ['eBooks/import', 'wordplayer/calibretransfer', 'Books', EBOOK_DIR_MAIN = ['eBooks/import', 'wordplayer/calibretransfer', 'Books',


@ -734,6 +734,7 @@ initlibmtp(void) {
// who designs a library without any way to control/redirect the debugging // who designs a library without any way to control/redirect the debugging
// output, and hardcoded paths that cannot be changed? // output, and hardcoded paths that cannot be changed?
int bak, new; int bak, new;
fprintf(stdout, "\n"); // This is needed; without it, for some odd reason, the code below causes stdout to buffer all output after it is restored (rather than using line buffering), and setlinebuf does not work.
fflush(stdout); fflush(stdout);
bak = dup(STDOUT_FILENO); bak = dup(STDOUT_FILENO);
new = open("/dev/null", O_WRONLY); new = open("/dev/null", O_WRONLY);


@ -8,11 +8,11 @@ __docformat__ = 'restructuredtext en'
Convert OEB ebook format to PDF. Convert OEB ebook format to PDF.
''' '''
import glob import glob, os
import os
from calibre.customize.conversion import OutputFormatPlugin, \ from calibre.constants import iswindows
OptionRecommendation from calibre.customize.conversion import (OutputFormatPlugin,
OptionRecommendation)
from calibre.ptempfile import TemporaryDirectory from calibre.ptempfile import TemporaryDirectory
UNITS = ['millimeter', 'centimeter', 'point', 'inch' , 'pica' , 'didot', UNITS = ['millimeter', 'centimeter', 'point', 'inch' , 'pica' , 'didot',
@ -136,8 +136,8 @@ class PDFOutput(OutputFormatPlugin):
''' '''
from calibre.ebooks.oeb.base import urlnormalize from calibre.ebooks.oeb.base import urlnormalize
from calibre.gui2 import must_use_qt from calibre.gui2 import must_use_qt
from calibre.utils.fonts.utils import get_font_names, remove_embed_restriction from calibre.utils.fonts.utils import remove_embed_restriction
from PyQt4.Qt import QFontDatabase, QByteArray from PyQt4.Qt import QFontDatabase, QByteArray, QRawFont, QFont
# First find all @font-face rules and remove them, adding the embedded # First find all @font-face rules and remove them, adding the embedded
# fonts to Qt # fonts to Qt
@ -166,11 +166,13 @@ class PDFOutput(OutputFormatPlugin):
except: except:
continue continue
must_use_qt() must_use_qt()
QFontDatabase.addApplicationFontFromData(QByteArray(raw)) fid = QFontDatabase.addApplicationFontFromData(QByteArray(raw))
try:
family_name = get_font_names(raw)[0]
except:
family_name = None family_name = None
if fid > -1:
try:
family_name = unicode(QFontDatabase.applicationFontFamilies(fid)[0])
except (IndexError, KeyError):
pass
if family_name: if family_name:
family_map[icu_lower(font_family)] = family_name family_map[icu_lower(font_family)] = family_name
@ -179,6 +181,7 @@ class PDFOutput(OutputFormatPlugin):
# Now map the font family name specified in the css to the actual # Now map the font family name specified in the css to the actual
# family name of the embedded font (they may be different in general). # family name of the embedded font (they may be different in general).
font_warnings = set()
for item in self.oeb.manifest: for item in self.oeb.manifest:
if not hasattr(item.data, 'cssRules'): continue if not hasattr(item.data, 'cssRules'): continue
for i, rule in enumerate(item.data.cssRules): for i, rule in enumerate(item.data.cssRules):
@ -187,9 +190,28 @@ class PDFOutput(OutputFormatPlugin):
if ff is None: continue if ff is None: continue
val = ff.propertyValue val = ff.propertyValue
for i in xrange(val.length): for i in xrange(val.length):
try:
k = icu_lower(val[i].value) k = icu_lower(val[i].value)
except (AttributeError, TypeError):
val[i].value = k = 'times'
if k in family_map: if k in family_map:
val[i].value = family_map[k] val[i].value = family_map[k]
if iswindows:
# On Windows, Qt uses GDI, which does not support OpenType
# (CFF) fonts, so we need to nuke references to OpenType
# fonts. Note that you could compile Qt with configure
# -directwrite, but that requires at least Vista SP2
for i in xrange(val.length):
family = val[i].value
if family:
f = QRawFont.fromFont(QFont(family))
if len(f.fontTable('head')) == 0:
if family not in font_warnings:
self.log.warn('Ignoring unsupported font: %s'
%family)
font_warnings.add(family)
# Either a bitmap or (more likely) a CFF font
val[i].value = 'times'
def convert_text(self, oeb_book): def convert_text(self, oeb_book):
from calibre.ebooks.metadata.opf2 import OPF from calibre.ebooks.metadata.opf2 import OPF
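The new font-family handling in `PDFOutput` does two substitutions: declared CSS family names are mapped to the embedded fonts' real family names (as reported by Qt after registration), and, on Windows, families whose fonts GDI cannot render are replaced with a fallback. A simplified, Qt-free sketch of that mapping logic (the `unsupported` set stands in for the `QRawFont` 'head'-table probe, and the names are illustrative):

```python
def map_families(families, family_map, unsupported, fallback='times'):
    """Rewrite a CSS font-family list: swap declared names for the real
    embedded family names, and replace unsupported families (e.g. CFF
    fonts under GDI) or non-string values with a fallback family."""
    out = []
    for fam in families:
        key = fam.lower() if isinstance(fam, str) else None
        if key is None:
            out.append(fallback)           # parse artifact, not a name
            continue
        fam = family_map.get(key, fam)     # declared name -> real name
        out.append(fallback if fam in unsupported else fam)
    return out
```

In the actual code the substitution is done in place on the cssutils property value, and a warning is logged once per unsupported family.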


@ -41,7 +41,6 @@ def find_custom_fonts(options, logger):
if options.serif_family: if options.serif_family:
f = family(options.serif_family) f = family(options.serif_family)
fonts['serif'] = font_scanner.legacy_fonts_for_family(f) fonts['serif'] = font_scanner.legacy_fonts_for_family(f)
print (111111, fonts['serif'])
if not fonts['serif']: if not fonts['serif']:
logger.warn('Unable to find serif family %s'%f) logger.warn('Unable to find serif family %s'%f)
if options.sans_family: if options.sans_family:


@ -19,7 +19,7 @@ from calibre.constants import plugins
from calibre.ebooks.pdf.render.serialize import (PDFStream, Path) from calibre.ebooks.pdf.render.serialize import (PDFStream, Path)
from calibre.ebooks.pdf.render.common import inch, A4, fmtnum from calibre.ebooks.pdf.render.common import inch, A4, fmtnum
from calibre.ebooks.pdf.render.graphics import convert_path, Graphics from calibre.ebooks.pdf.render.graphics import convert_path, Graphics
from calibre.utils.fonts.sfnt.container import Sfnt from calibre.utils.fonts.sfnt.container import Sfnt, UnsupportedFont
from calibre.utils.fonts.sfnt.metrics import FontMetrics from calibre.utils.fonts.sfnt.metrics import FontMetrics
Point = namedtuple('Point', 'x y') Point = namedtuple('Point', 'x y')
@ -224,7 +224,11 @@ class PdfEngine(QPaintEngine):
def create_sfnt(self, text_item): def create_sfnt(self, text_item):
get_table = partial(self.qt_hack.get_sfnt_table, text_item) get_table = partial(self.qt_hack.get_sfnt_table, text_item)
try:
ans = Font(Sfnt(get_table)) ans = Font(Sfnt(get_table))
except UnsupportedFont as e:
raise UnsupportedFont('The font %s is not a valid sfnt. Error: %s'%(
text_item.font().family(), e))
glyph_map = self.qt_hack.get_glyph_map(text_item) glyph_map = self.qt_hack.get_glyph_map(text_item)
gm = {} gm = {}
for uc, glyph_id in enumerate(glyph_map): for uc, glyph_id in enumerate(glyph_map):
@ -251,18 +255,14 @@ class PdfEngine(QPaintEngine):
except (KeyError, ValueError): except (KeyError, ValueError):
pass pass
glyphs = [] glyphs = []
pdf_pos = point last_x = last_y = 0
first_baseline = None
for i, pos in enumerate(gi.positions): for i, pos in enumerate(gi.positions):
if first_baseline is None: x, y = pos.x(), pos.y()
first_baseline = pos.y() glyphs.append((x-last_x, last_y - y, gi.indices[i]))
glyph_pos = pos last_x, last_y = x, y
delta = glyph_pos - pdf_pos
glyphs.append((delta.x(), pos.y()-first_baseline, gi.indices[i]))
pdf_pos = glyph_pos
self.pdf.draw_glyph_run([1, 0, 0, -1, point.x(), self.pdf.draw_glyph_run([gi.stretch, 0, 0, -1, 0, 0], gi.size, metrics,
point.y()], gi.size, metrics, glyphs) glyphs)
sip.delete(gi) sip.delete(gi)
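The rewritten glyph loop above stops tracking a first baseline and instead emits per-glyph deltas, with y negated to match the flipped PDF text matrix (`[..., 0, 0, -1, ...]`). The transformation in isolation, as a sketch:

```python
def relative_glyphs(positions, indices):
    """Convert absolute glyph positions into the (dx, dy, glyph_id)
    deltas fed to draw_glyph_run: x offsets are relative to the
    previous glyph, and y is negated for the flipped text matrix."""
    glyphs = []
    last_x = last_y = 0
    for (x, y), glyph_id in zip(positions, indices):
        glyphs.append((x - last_x, last_y - y, glyph_id))
        last_x, last_y = x, y
    return glyphs
```

Encoding positions relative to the previous glyph keeps the numbers small and lets a single text matrix carry the page placement and stretch.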
@store_error @store_error


@ -176,6 +176,7 @@ class PDFWriter(QObject):
p = QPixmap() p = QPixmap()
p.loadFromData(self.cover_data) p.loadFromData(self.cover_data)
if not p.isNull(): if not p.isNull():
self.doc.init_page()
draw_image_page(QRect(0, 0, self.doc.width(), self.doc.height()), draw_image_page(QRect(0, 0, self.doc.width(), self.doc.height()),
self.painter, p, self.painter, p,
preserve_aspect_ratio=self.opts.preserve_cover_aspect_ratio) preserve_aspect_ratio=self.opts.preserve_cover_aspect_ratio)


@ -0,0 +1,37 @@
#!/usr/bin/env python
# vim:fileencoding=UTF-8:ts=4:sw=4:sta:et:sts=4:fdm=marker:ai
from __future__ import (unicode_literals, division, absolute_import,
print_function)
__license__ = 'GPL v3'
__copyright__ = '2013, Kovid Goyal <kovid at kovidgoyal.net>'
__docformat__ = 'restructuredtext en'
from future_builtins import map
from PyQt4.Qt import (QPointF)
from calibre.ebooks.pdf.render.common import Stream
def generate_linear_gradient_shader(gradient, page_rect, is_transparent=False):
pass
class LinearGradient(Stream):
def __init__(self, brush, matrix, pixel_page_width, pixel_page_height):
is_opaque = brush.isOpaque()
gradient = brush.gradient()
inv = matrix.inverted()[0]
page_rect = tuple(map(inv.map, (
QPointF(0, 0), QPointF(pixel_page_width, 0), QPointF(0, pixel_page_height),
QPointF(pixel_page_width, pixel_page_height))))
shader = generate_linear_gradient_shader(gradient, page_rect)
alpha_shader = None
if not is_opaque:
alpha_shader = generate_linear_gradient_shader(gradient, page_rect, True)
shader, alpha_shader


@ -58,7 +58,13 @@ class Links(object):
0])}) 0])})
if is_local: if is_local:
path = combined_path if href else path path = combined_path if href else path
try:
annot['Dest'] = self.anchors[path][frag] annot['Dest'] = self.anchors[path][frag]
except KeyError:
try:
annot['Dest'] = self.anchors[path][None]
except KeyError:
pass
else: else:
url = href + (('#'+frag) if frag else '') url = href + (('#'+frag) if frag else '')
purl = urlparse(url) purl = urlparse(url)


@ -17,18 +17,25 @@ GlyphInfo* get_glyphs(QPointF &p, const QTextItem &text_item) {
QFontEngine *fe = ti.fontEngine; QFontEngine *fe = ti.fontEngine;
qreal size = ti.fontEngine->fontDef.pixelSize; qreal size = ti.fontEngine->fontDef.pixelSize;
#ifdef Q_WS_WIN #ifdef Q_WS_WIN
if (ti.fontEngine->type() == QFontEngine::Win) { if (false && ti.fontEngine->type() == QFontEngine::Win) {
// This is used in the Qt source code, but it gives incorrect results,
// so I have disabled it. I don't understand how it works in qpdf.cpp
QFontEngineWin *fe = static_cast<QFontEngineWin *>(ti.fontEngine); QFontEngineWin *fe = static_cast<QFontEngineWin *>(ti.fontEngine);
// I think this should be tmHeight - tmInternalLeading, but pixelSize
// seems to work on Windows as well, so leave it as pixelSize
size = fe->tm.tmHeight; size = fe->tm.tmHeight;
} }
#endif #endif
int synthesized = ti.fontEngine->synthesized();
qreal stretch = synthesized & QFontEngine::SynthesizedStretch ? ti.fontEngine->fontDef.stretch/100. : 1.;
QVarLengthArray<glyph_t> glyphs; QVarLengthArray<glyph_t> glyphs;
QVarLengthArray<QFixedPoint> positions; QVarLengthArray<QFixedPoint> positions;
QTransform m = QTransform::fromTranslate(p.x(), p.y()); QTransform m = QTransform::fromTranslate(p.x(), p.y());
fe->getGlyphPositions(ti.glyphs, m, ti.flags, glyphs, positions); fe->getGlyphPositions(ti.glyphs, m, ti.flags, glyphs, positions);
QVector<QPointF> points = QVector<QPointF>(positions.count()); QVector<QPointF> points = QVector<QPointF>(positions.count());
for (int i = 0; i < positions.count(); i++) { for (int i = 0; i < positions.count(); i++) {
points[i].setX(positions[i].x.toReal()); points[i].setX(positions[i].x.toReal()/stretch);
points[i].setY(positions[i].y.toReal()); points[i].setY(positions[i].y.toReal());
} }
@ -38,10 +45,10 @@ GlyphInfo* get_glyphs(QPointF &p, const QTextItem &text_item) {
const quint32 *tag = reinterpret_cast<const quint32 *>("name"); const quint32 *tag = reinterpret_cast<const quint32 *>("name");
return new GlyphInfo(fe->getSfntTable(qToBigEndian(*tag)), size, points, indices); return new GlyphInfo(fe->getSfntTable(qToBigEndian(*tag)), size, stretch, points, indices);
} }
GlyphInfo::GlyphInfo(const QByteArray& name, qreal size, const QVector<QPointF> &positions, const QVector<unsigned int> &indices) :name(name), positions(positions), size(size), indices(indices) { GlyphInfo::GlyphInfo(const QByteArray& name, qreal size, qreal stretch, const QVector<QPointF> &positions, const QVector<unsigned int> &indices) :name(name), positions(positions), size(size), stretch(stretch), indices(indices) {
} }
QByteArray get_sfnt_table(const QTextItem &text_item, const char* tag_name) { QByteArray get_sfnt_table(const QTextItem &text_item, const char* tag_name) {
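In the patched `get_glyphs()`, a synthesized horizontal stretch is divided out of each glyph's x position so that the stretch can be applied once through the text matrix in `draw_glyph_run` instead of being baked into every coordinate. The per-point adjustment, sketched in Python:

```python
def unstretch_positions(points, stretch):
    """Divide a synthesized horizontal stretch out of glyph x positions,
    as the patched get_glyphs() does, leaving y untouched; the stretch
    is then reapplied once in the PDF text matrix."""
    return [(x / stretch, y) for (x, y) in points]
```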


@ -17,9 +17,10 @@ class GlyphInfo {
QByteArray name; QByteArray name;
QVector<QPointF> positions; QVector<QPointF> positions;
qreal size; qreal size;
qreal stretch;
QVector<unsigned int> indices; QVector<unsigned int> indices;
GlyphInfo(const QByteArray &name, qreal size, const QVector<QPointF> &positions, const QVector<unsigned int> &indices); GlyphInfo(const QByteArray &name, qreal size, qreal stretch, const QVector<QPointF> &positions, const QVector<unsigned int> &indices);
private: private:
GlyphInfo(const GlyphInfo&); GlyphInfo(const GlyphInfo&);


@ -13,9 +13,10 @@ class GlyphInfo {
public: public:
QByteArray name; QByteArray name;
qreal size; qreal size;
qreal stretch;
QVector<QPointF> &positions; QVector<QPointF> &positions;
QVector<unsigned int> indices; QVector<unsigned int> indices;
GlyphInfo(const QByteArray &name, qreal size, const QVector<QPointF> &positions, const QVector<unsigned int> &indices); GlyphInfo(const QByteArray &name, qreal size, qreal stretch, const QVector<QPointF> &positions, const QVector<unsigned int> &indices);
private: private:
GlyphInfo(const GlyphInfo& g); GlyphInfo(const GlyphInfo& g);


@ -8,7 +8,6 @@ __copyright__ = '2012, Kovid Goyal <kovid at kovidgoyal.net>'
__docformat__ = 'restructuredtext en' __docformat__ = 'restructuredtext en'
import os import os
from tempfile import gettempdir
from PyQt4.Qt import (QBrush, QColor, QPoint, QPixmap, QPainterPath, QRectF, from PyQt4.Qt import (QBrush, QColor, QPoint, QPixmap, QPainterPath, QRectF,
QApplication, QPainter, Qt, QImage, QLinearGradient, QApplication, QPainter, Qt, QImage, QLinearGradient,
@ -99,12 +98,17 @@ def pen(p, xmax, ymax):
p.drawRect(0, xmax/3, xmax/3, xmax/2) p.drawRect(0, xmax/3, xmax/3, xmax/2)
def text(p, xmax, ymax): def text(p, xmax, ymax):
p.drawText(QPoint(0, ymax/3), 'Text') f = p.font()
f.setPixelSize(24)
f.setFamily('Candara')
p.setFont(f)
p.drawText(QPoint(0, 100),
'Test intra glyph spacing ffagain imceo')
def main(): def main():
app = QApplication([]) app = QApplication([])
app app
tdir = gettempdir() tdir = os.path.abspath('.')
pdf = os.path.join(tdir, 'painter.pdf') pdf = os.path.join(tdir, 'painter.pdf')
func = full func = full
dpi = 100 dpi = 100


@ -169,6 +169,10 @@ class ChooseLibraryAction(InterfaceAction):
self.choose_menu = self.qaction.menu() self.choose_menu = self.qaction.menu()
ac = self.create_action(spec=(_('Pick a random book'), 'random.png',
None, None), attr='action_pick_random')
ac.triggered.connect(self.pick_random)
if not os.environ.get('CALIBRE_OVERRIDE_DATABASE_PATH', None): if not os.environ.get('CALIBRE_OVERRIDE_DATABASE_PATH', None):
self.choose_menu.addAction(self.action_choose) self.choose_menu.addAction(self.action_choose)
@ -176,12 +180,10 @@ class ChooseLibraryAction(InterfaceAction):
self.quick_menu_action = self.choose_menu.addMenu(self.quick_menu) self.quick_menu_action = self.choose_menu.addMenu(self.quick_menu)
self.rename_menu = QMenu(_('Rename library')) self.rename_menu = QMenu(_('Rename library'))
self.rename_menu_action = self.choose_menu.addMenu(self.rename_menu) self.rename_menu_action = self.choose_menu.addMenu(self.rename_menu)
self.choose_menu.addAction(ac)
self.delete_menu = QMenu(_('Remove library')) self.delete_menu = QMenu(_('Remove library'))
self.delete_menu_action = self.choose_menu.addMenu(self.delete_menu) self.delete_menu_action = self.choose_menu.addMenu(self.delete_menu)
else:
ac = self.create_action(spec=(_('Pick a random book'), 'random.png',
None, None), attr='action_pick_random')
ac.triggered.connect(self.pick_random)
self.choose_menu.addAction(ac) self.choose_menu.addAction(ac)
self.rename_separator = self.choose_menu.addSeparator() self.rename_separator = self.choose_menu.addSeparator()


@@ -8,10 +8,10 @@ from functools import partial
 from PyQt4.Qt import QThread, QObject, Qt, QProgressDialog, pyqtSignal, QTimer
 from calibre.gui2.dialogs.progress import ProgressDialog
-from calibre.gui2 import (question_dialog, error_dialog, info_dialog, gprefs,
+from calibre.gui2 import (error_dialog, info_dialog, gprefs,
         warning_dialog, available_width)
 from calibre.ebooks.metadata.opf2 import OPF
-from calibre.ebooks.metadata import MetaInformation, authors_to_string
+from calibre.ebooks.metadata import MetaInformation
 from calibre.constants import preferred_encoding, filesystem_encoding, DEBUG
 from calibre.utils.config import prefs
 from calibre import prints, force_unicode, as_unicode
@@ -391,25 +391,10 @@ class Adder(QObject): # {{{
         if not duplicates:
             return self.duplicates_processed()
         self.pd.hide()
-        duplicate_message = []
-        for x in duplicates:
-            duplicate_message.append(_('Already in calibre:'))
-            matching_books = self.db.books_with_same_title(x[0])
-            for book_id in matching_books:
-                aut = [a.replace('|', ',') for a in (self.db.authors(book_id,
-                    index_is_id=True) or '').split(',')]
-                duplicate_message.append('\t'+ _('%(title)s by %(author)s')%
-                        dict(title=self.db.title(book_id, index_is_id=True),
-                            author=authors_to_string(aut)))
-            duplicate_message.append(_('You are trying to add:'))
-            duplicate_message.append('\t'+_('%(title)s by %(author)s')%
-                    dict(title=x[0].title,
-                        author=x[0].format_field('authors')[1]))
-            duplicate_message.append('')
-        if question_dialog(self._parent, _('Duplicates found!'),
-                _('Books with the same title as the following already '
-                'exist in calibre. Add them anyway?'),
-                '\n'.join(duplicate_message)):
+        from calibre.gui2.dialogs.duplicates import DuplicatesQuestion
+        d = DuplicatesQuestion(self.db, duplicates, self._parent)
+        duplicates = tuple(d.duplicates)
+        if duplicates:
             pd = QProgressDialog(_('Adding duplicates...'), '', 0, len(duplicates),
                     self._parent)
             pd.setCancelButton(None)
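The hunk above swaps the old plain-text `question_dialog` for the new per-book `DuplicatesQuestion` dialog: the dialog's `duplicates` property yields only the entries the user left checked, and the add proceeds only when any remain. A GUI-free sketch of that select-then-filter step (all names here are illustrative stand-ins, not calibre API; written in Python 3 although the repo code is Python 2):

```python
def filter_duplicates(duplicates, wanted_titles):
    """Keep only the duplicate entries the user opted to add anyway.

    `duplicates` is a sequence of (metadata, cover, formats) tuples, as in
    Adder; `wanted_titles` stands in for the dialog's check boxes.
    """
    return tuple(d for d in duplicates if d[0] in wanted_titles)

# Two candidate duplicates; the user keeps only the first one checked.
dups = [('Life of Pi', None, None), ('Heirs of the Blade', None, None)]
kept = filter_duplicates(dups, {'Life of Pi'})
```

As in the patched `Adder`, the caller then tests the filtered tuple for truthiness before showing the "Adding duplicates..." progress dialog.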

View File

@@ -0,0 +1,118 @@
+#!/usr/bin/env python
+# vim:fileencoding=UTF-8:ts=4:sw=4:sta:et:sts=4:fdm=marker:ai
+from __future__ import (unicode_literals, division, absolute_import,
+                        print_function)
+
+__license__ = 'GPL v3'
+__copyright__ = '2013, Kovid Goyal <kovid at kovidgoyal.net>'
+__docformat__ = 'restructuredtext en'
+
+from PyQt4.Qt import (QDialog, QGridLayout, QIcon, QLabel, QTreeWidget,
+                      QTreeWidgetItem, Qt, QFont, QDialogButtonBox)
+
+from calibre.ebooks.metadata import authors_to_string
+
+
+class DuplicatesQuestion(QDialog):
+
+    def __init__(self, db, duplicates, parent=None):
+        QDialog.__init__(self, parent)
+        self.l = l = QGridLayout()
+        self.setLayout(l)
+        self.setWindowTitle(_('Duplicates found!'))
+        self.i = i = QIcon(I('dialog_question.png'))
+        self.setWindowIcon(i)
+
+        self.l1 = l1 = QLabel()
+        self.l2 = l2 = QLabel(_(
+            'Books with the same titles as the following already '
+            'exist in calibre. Select which books you want added anyway.'))
+        l2.setWordWrap(True)
+        l1.setPixmap(i.pixmap(128, 128))
+        l.addWidget(l1, 0, 0)
+        l.addWidget(l2, 0, 1)
+
+        self.dup_list = dl = QTreeWidget(self)
+        l.addWidget(dl, 1, 0, 1, 2)
+        dl.setHeaderHidden(True)
+        dl.addTopLevelItems(list(self.process_duplicates(db, duplicates)))
+        dl.expandAll()
+        dl.setIndentation(30)
+
+        self.bb = bb = QDialogButtonBox(QDialogButtonBox.Ok|QDialogButtonBox.Cancel)
+        bb.accepted.connect(self.accept)
+        bb.rejected.connect(self.reject)
+        l.addWidget(bb, 2, 0, 1, 2)
+        self.ab = ab = bb.addButton(_('Select &all'), bb.ActionRole)
+        ab.clicked.connect(self.select_all)
+        self.nb = ab = bb.addButton(_('Select &none'), bb.ActionRole)
+        ab.clicked.connect(self.select_none)
+
+        self.resize(self.sizeHint())
+        self.exec_()
+
+    def select_all(self):
+        for i in xrange(self.dup_list.topLevelItemCount()):
+            x = self.dup_list.topLevelItem(i)
+            x.setCheckState(0, Qt.Checked)
+
+    def select_none(self):
+        for i in xrange(self.dup_list.topLevelItemCount()):
+            x = self.dup_list.topLevelItem(i)
+            x.setCheckState(0, Qt.Unchecked)
+
+    def reject(self):
+        self.select_none()
+        QDialog.reject(self)
+
+    def process_duplicates(self, db, duplicates):
+        ta = _('%(title)s by %(author)s')
+        bf = QFont(self.dup_list.font())
+        bf.setBold(True)
+        itf = QFont(self.dup_list.font())
+        itf.setItalic(True)
+
+        for mi, cover, formats in duplicates:
+            item = QTreeWidgetItem([ta%dict(
+                title=mi.title, author=mi.format_field('authors')[1])], 0)
+            item.setCheckState(0, Qt.Checked)
+            item.setFlags(Qt.ItemIsEnabled|Qt.ItemIsUserCheckable)
+            item.setData(0, Qt.FontRole, bf)
+            item.setData(0, Qt.UserRole, (mi, cover, formats))
+            matching_books = db.books_with_same_title(mi)
+
+            def add_child(text):
+                c = QTreeWidgetItem([text], 1)
+                c.setFlags(Qt.ItemIsEnabled)
+                item.addChild(c)
+                return c
+
+            add_child(_('Already in calibre:')).setData(0, Qt.FontRole, itf)
+            for book_id in matching_books:
+                aut = [a.replace('|', ',') for a in (db.authors(book_id,
+                    index_is_id=True) or '').split(',')]
+                add_child(ta%dict(
+                    title=db.title(book_id, index_is_id=True),
+                    author=authors_to_string(aut)))
+            add_child('')
+
+            yield item
+
+    @property
+    def duplicates(self):
+        for i in xrange(self.dup_list.topLevelItemCount()):
+            x = self.dup_list.topLevelItem(i)
+            if x.checkState(0) == Qt.Checked:
+                yield x.data(0, Qt.UserRole).toPyObject()
+
+
+if __name__ == '__main__':
+    from PyQt4.Qt import QApplication
+    from calibre.ebooks.metadata.book.base import Metadata as M
+    from calibre.library import db
+
+    app = QApplication([])
+    db = db()
+    d = DuplicatesQuestion(db, [(M('Life of Pi', ['Yann Martel']), None, None),
+        (M('Heirs of the blade', ['Adrian Tchaikovsky']), None, None)])
+    print(tuple(d.duplicates))
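The dialog returns its result lazily: the `duplicates` property walks the top-level tree items and yields the `(mi, cover, formats)` payload of each checked one. A minimal Qt-free sketch of the same checked-items-generator pattern (`Item` and `Question` are hypothetical stand-ins for `QTreeWidgetItem` and the dialog; Python 3 here, unlike the Python 2 file above):

```python
class Item:
    """Stand-in for a checkable QTreeWidgetItem carrying a payload."""
    def __init__(self, payload, checked=True):
        self.payload = payload
        self.checked = checked

class Question:
    """Stand-in for the dialog: exposes checked payloads as a generator."""
    def __init__(self, items):
        self.items = items

    @property
    def duplicates(self):
        # Yield only the payloads the user left checked.
        for it in self.items:
            if it.checked:
                yield it.payload

q = Question([Item('a'), Item('b', checked=False), Item('c')])
picked = tuple(q.duplicates)
```

Because the property is a generator, the caller materializes it once with `tuple(...)`, exactly as `Adder` does with `duplicates = tuple(d.duplicates)`.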

View File

@@ -1,10 +0,0 @@
-#!/usr/bin/env python
-# vim:fileencoding=UTF-8:ts=4:sw=4:sta:et:sts=4:ai
-from __future__ import (unicode_literals, division, absolute_import,
-                        print_function)
-
-__license__ = 'GPL v3'
-__copyright__ = '2011, Kovid Goyal <kovid@kovidgoyal.net>'
-__docformat__ = 'restructuredtext en'
-
-

View File

@@ -1106,6 +1106,7 @@ class SortKeyGenerator(object):
         self.library_order = tweaks['title_series_sorting'] == 'library_order'
         self.data = data
         self.string_sort_key = sort_key
+        self.lang_idx = field_metadata['languages']['rec_index']

     def __call__(self, record):
         values = tuple(self.itervals(self.data[record]))
@@ -1159,7 +1160,12 @@ class SortKeyGenerator(object):
                    val = ('', 1)
                else:
                    if self.library_order:
-                        val = title_sort(val)
+                        try:
+                            lang = record[self.lang_idx].partition(u',')[0]
+                        except (AttributeError, ValueError, KeyError,
+                                IndexError, TypeError):
+                            lang = None
+                        val = title_sort(val, order='library_order', lang=lang)
                    sidx_fm = self.field_metadata[name + '_index']
                    sidx = record[sidx_fm['rec_index']]
                    val = (self.string_sort_key(val), sidx)
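The new sort key reads the record's languages field and feeds the first language code to `title_sort()`, falling back to `None` when the field is missing or malformed. A standalone sketch of just that extraction (`first_lang` is a hypothetical helper; the file itself is Python 2, this sketch is Python 3):

```python
def first_lang(record_langs):
    """Return the first code of a comma-separated language string.

    Mirrors the patched SortKeyGenerator: any failure (None field, odd
    type, etc.) degrades to None rather than aborting the sort.
    """
    try:
        return record_langs.partition(',')[0]
    except (AttributeError, ValueError, KeyError, IndexError, TypeError):
        return None

lang_a = first_lang('eng,fra')   # first code of a multi-language book
lang_b = first_lang(None)        # missing field degrades gracefully
```

`str.partition` never raises on a valid string, so the broad `except` exists only to absorb bad record data, matching the defensive intent of the hunk above.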

View File

@@ -3473,7 +3473,7 @@ class CatalogBuilder(object):
             self.play_order += 1
             navLabelTag = Tag(ncx_soup, 'navLabel')
             textTag = Tag(ncx_soup, 'text')
-            if len(authors_by_letter[1]) > 1:
+            if authors_by_letter[1] == self.SYMBOLS:
                 fmt_string = _(u"Authors beginning with %s")
             else:
                 fmt_string = _(u"Authors beginning with '%s'")
@@ -4422,12 +4422,12 @@ class CatalogBuilder(object):
         Generate a legal XHTML anchor from unicode character.

         Args:
-         c (unicode): character
+         c (unicode): character(s)

         Return:
-         (str): legal XHTML anchor string of unicode charactar name
+         (str): legal XHTML anchor string of unicode character name
         """
-        fullname = unicodedata.name(unicode(c))
+        fullname = u''.join(unicodedata.name(unicode(cc)) for cc in c)
         terms = fullname.split()
         return "_".join(terms)
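The fix builds the anchor from the concatenated Unicode names of every character in `c`, so multi-character inputs no longer raise inside `unicodedata.name()`. A standalone Python 3 sketch (`generate_unicode_name` is a hypothetical stand-in for the CatalogBuilder method, which is Python 2):

```python
import unicodedata

def generate_unicode_name(c):
    """Build a legal XHTML anchor from the Unicode name(s) of `c`.

    Joins the name of each character, then replaces spaces with
    underscores, as in the patched helper.
    """
    fullname = ''.join(unicodedata.name(cc) for cc in c)
    return "_".join(fullname.split())

anchor = generate_unicode_name('A')  # 'LATIN_CAPITAL_LETTER_A'
```

Note that `unicodedata.name()` accepts only a single character, which is exactly why the original one-shot call failed on multi-character section heads.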

View File

@@ -441,6 +441,10 @@ class BrowseServer(object):
                 cat_len = len(category)
                 if not (len(ucat) > cat_len and ucat.startswith(category+'.')):
                     continue
-                icon = category_icon_map['user:']
+                if ucat in self.icon_map:
+                    icon = '_'+quote(self.icon_map[ucat])
+                else:
+                    icon = category_icon_map['user:']
                 # we have a subcategory. Find any further dots (further subcats)
                 cat_len += 1
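The fix consults the user-defined icon map before falling back to the generic `'user:'` category icon. A sketch of the lookup with plain dicts standing in for the server state (`icon_for` is a hypothetical helper; Python 3's `urllib.parse.quote` replaces the Python 2 `quote` used in the server):

```python
from urllib.parse import quote

def icon_for(ucat, icon_map, category_icon_map):
    """Resolve the icon for a user sub-category.

    A custom icon (URL-quoted, with the '_' prefix the browse server uses
    to mark custom icons) wins over the generic 'user:' fallback.
    """
    if ucat in icon_map:
        return '_' + quote(icon_map[ucat])
    return category_icon_map['user:']

icon = icon_for('user:tags.sub', {'user:tags.sub': 'my icon.png'},
                {'user:': 'user_profile.png'})
```

URL-quoting matters here because custom icon file names can contain spaces and other characters that are not safe in an `src` attribute.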

File diff suppressed because it is too large

Some files were not shown because too many files have changed in this diff.