GRiker 2013-02-22 04:21:31 -07:00
commit cddc942115
111 changed files with 58078 additions and 42930 deletions


@ -19,6 +19,74 @@
# new recipes:
# - title:
- version: 0.9.20
date: 2013-02-22
new features:
- title: "Book polishing: Add an option to smarten punctuation in the book when polishing"
- title: "Book polishing: Add an option to delete all saved settings to the load saved settings button"
- title: "Book polishing: Remember the last used settings"
- title: "Book polishing: Add a checkbox to enable/disable the detailed polishing report"
- title: "Add a separate tweak in Preferences->Tweaks for saving backups of files when polishing. That way you can have calibre save backups while converting EPUB->EPUB and not while polishing, if you so desire."
- title: "Content server: Allow clicking on the book cover to download it. Useful on small screen devices where clicking the Get button may be difficult"
- title: "Driver for Energy Systems C4 Touch."
tickets: [1127477]
bug fixes:
- title: "E-book viewer: Fix a bug that could cause the back button in the viewer to skip a location"
- title: "When tweaking/polishing an azw3 file that does not have an identified content ToC, do not auto-generate one."
tickets: [1130729]
- title: "Book polishing: Use the actual cover image dimensions when creating the svg wrapper for the cover image."
tickets: [1127273]
- title: "Book polishing: Do not error out on epub files containing an iTunesMetadata.plist file."
tickets: [1127308]
- title: "Book polishing: Fix trying to polish more than 5 books at a time not working"
- title: "Content server: Add workaround for bug in latest release of Google Chrome that causes it to not work with book lists containing some utf-8 characters"
tickets: [1130478]
- title: "E-book viewer: When viewing EPUB files, do not parse html as xhtml even if it has svg tags embedded. This allows malformed XHTML files to still be viewed."
- title: "Bulk metadata edit Search & replace: Update the sample values when changing the type of identifier to search on"
- title: "Fix recipes with the / character in their names not being usable from the command line"
tickets: [1127666]
- title: "News download: Fix regression that broke downloading of images in gif format"
- title: "EPUB/AZW3 Output: When splitting the output html on page breaks, handle page-break-after rules correctly; the html before the split point should contain the full element"
- title: "Fix stdout/stderr redirection temp files not being deleted when restarting calibre from within calibre on windows"
- title: "E-book viewer: When viewing epub files that have their cover marked as non-linear, show the cover at the start of the book instead of the end."
tickets: [1126030]
- title: "EPUB Input: Fix handling of cover references with fragments in the urls"
improved recipes:
- Fronda
- Various Polish news sources
new recipes:
- title: Pravda
author: Darko Miletic
- title: PNN
author: n.kucklaender
- title: Various Polish news sources
author: fenuks
- version: 0.9.19
  date: 2013-02-15


@ -692,7 +692,7 @@ Post any output you see in a help message on the `Forum <http://www.mobileread.c
|app| freezes/crashes occasionally?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
There are several possible things I know of, that can cause this:

* You recently connected an external monitor or TV to your computer. In
this case, whenever |app| opens a new window like the edit metadata
@ -700,10 +700,6 @@ There are five possible things I know of, that can cause this:
you don't notice it and so you think |app| has frozen. Disconnect your
second monitor and restart calibre.
* If you use RoboForm, it is known to cause |app| to crash. Add |app| to
the blacklist of programs inside RoboForm to fix this. Or uninstall
RoboForm.
@ -714,6 +710,17 @@ There are five possible things I know of, that can cause this:
* Constant Guard Protection by Xfinity causes crashes in |app|. You have to
manually allow |app| in it or uninstall Constant Guard Protection.
* Spybot - Search & Destroy blocks |app| from accessing its temporary files,
breaking viewing and converting of books.
* You are using a Wacom branded USB mouse. There is an incompatibility between
Wacom mice and the graphics toolkit |app| uses. Try using a non-Wacom
mouse.
* On some 64 bit versions of Windows there are security software/settings
that prevent 64-bit |app| from working properly. If you are using the 64-bit
version of |app| try switching to the 32-bit version.
If none of the above apply to you, then there is some other program on your
computer that is interfering with |app|. First reboot your computer in safe
mode, to have as few running programs as possible, and see if the crashes still


@ -23,7 +23,6 @@ class Fronda(BasicNewsRecipe):
extra_css = '''
h1 {font-size:150%}
.body {text-align:left;}
'''

earliest_date = date.today() - timedelta(days=oldest_article)
@ -72,7 +71,7 @@ class Fronda(BasicNewsRecipe):
feeds.append((genName, articles[genName]))
return feeds

keep_only_tags = [
dict(name='div', attrs={'class':'yui-g'})
]
@ -84,5 +83,7 @@ class Fronda(BasicNewsRecipe):
dict(name='ul', attrs={'class':'comment-list'}),
dict(name='ul', attrs={'class':'category'}),
dict(name='p', attrs={'id':'comments-disclaimer'}),
dict(name='div', attrs={'style':'text-align: left; margin-bottom: 15px;'}),
dict(name='div', attrs={'style':'text-align: left; margin-top: 15px;'}),
dict(name='div', attrs={'id':'comment-form'})
]

recipes/icons/pravda_rs.png (new binary file, 606 B; not shown)

recipes/pravda_rs.recipe (new file, 85 lines)

@ -0,0 +1,85 @@
# vim:fileencoding=UTF-8:ts=4:sw=4:sta:et:sts=4:ai

__license__   = 'GPL v3'
__copyright__ = '2013, Darko Miletic <darko.miletic at gmail.com>'
'''
www.pravda.rs
'''

import re
from calibre.web.feeds.recipes import BasicNewsRecipe

class Pravda_rs(BasicNewsRecipe):
    title                 = 'Dnevne novine Pravda'
    __author__            = 'Darko Miletic'
    description           = '24 sata portal vesti iz Srbije'
    publisher             = 'Dnevne novine Pravda'
    category              = 'news, politics, entertainment, Serbia'
    oldest_article        = 2
    max_articles_per_feed = 100
    no_stylesheets        = True
    encoding              = 'utf-8'
    use_embedded_content  = False
    language              = 'sr'
    publication_type      = 'newspaper'
    remove_empty_feeds    = True
    PREFIX                = 'http://www.pravda.rs'
    FEEDPR                = PREFIX + '/category/'
    LANGLAT               = '?lng=lat'
    FEEDSU                = '/feed/' + LANGLAT
    INDEX                 = PREFIX + LANGLAT
    masthead_url          = 'http://www.pravda.rs/wp-content/uploads/2012/09/logoof.png'
    extra_css             = """
        @font-face {font-family: "serif1";src:url(res:///opt/sony/ebook/FONT/tt0011m_.ttf)}
        body{font-family: Georgia,"Times New Roman",Times,serif1,serif;}
        img{display: block}
    """

    conversion_options = {
        'comment'  : description,
        'tags'     : category,
        'publisher': publisher,
        'language' : language
    }

    preprocess_regexps = [(re.compile(u'\u0110'), lambda match: u'\u00D0')]

    keep_only_tags    = [dict(name='div', attrs={'class':'post'})]
    remove_tags       = [dict(name='h3')]
    remove_tags_after = dict(name='h3')

    feeds = [
        (u'Politika'        , FEEDPR + 'politika/'         + FEEDSU),
        (u'Tema Dana'       , FEEDPR + 'tema-dana/'        + FEEDSU),
        (u'Hronika'         , FEEDPR + 'hronika/'          + FEEDSU),
        (u'Društvo'         , FEEDPR + 'drustvo/'          + FEEDSU),
        (u'Ekonomija'       , FEEDPR + 'ekonomija/'        + FEEDSU),
        (u'Srbija'          , FEEDPR + 'srbija/'           + FEEDSU),
        (u'Beograd'         , FEEDPR + 'beograd/'          + FEEDSU),
        (u'Kultura'         , FEEDPR + 'kultura/'          + FEEDSU),
        (u'Zabava'          , FEEDPR + 'zabava/'           + FEEDSU),
        (u'Sport'           , FEEDPR + 'sport/'            + FEEDSU),
        (u'Svet'            , FEEDPR + 'svet/'             + FEEDSU),
        (u'Porodica'        , FEEDPR + 'porodica/'         + FEEDSU),
        (u'Vremeplov'       , FEEDPR + 'vremeplov/'        + FEEDSU),
        (u'IT'              , FEEDPR + 'it/'               + FEEDSU),
        (u'Republika Srpska', FEEDPR + 'republika-srpska/' + FEEDSU),
        (u'Crna Gora'       , FEEDPR + 'crna-gora/'        + FEEDSU),
        (u'EX YU'           , FEEDPR + 'eks-ju/'           + FEEDSU),
        (u'Dijaspora'       , FEEDPR + 'dijaspora/'        + FEEDSU),
        (u'Kolumna'         , FEEDPR + 'kolumna/'          + FEEDSU),
        (u'Afere'           , FEEDPR + 'afere/'            + FEEDSU),
        (u'Feljton'         , FEEDPR + 'feljton/'          + FEEDSU),
        (u'Intervju'        , FEEDPR + 'intervju/'         + FEEDSU),
        (u'Reportaža'       , FEEDPR + 'reportaza/'        + FEEDSU),
        (u'Zanimljivosti'   , FEEDPR + 'zanimljivosti/'    + FEEDSU),
        (u'Sa trga'         , FEEDPR + 'sa-trga/'          + FEEDSU)
    ]

    def print_version(self, url):
        return url + self.LANGLAT

    def preprocess_raw_html(self, raw, url):
        return '<html><head><title>title</title>' + raw[raw.find('</head>'):]
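The `preprocess_raw_html` override above discards everything before the page's closing `</head>` tag and prepends a minimal replacement head. A standalone sketch of that string surgery, independent of calibre's recipe API (the input page is hypothetical; it assumes the `</head>` tag is present):

```python
def strip_head(raw):
    # Find the closing </head> tag, keep everything from there onwards,
    # and prepend a minimal head (find() returns -1 if the tag is missing,
    # so real code would want to guard against that).
    idx = raw.find('</head>')
    return '<html><head><title>title</title>' + raw[idx:]

page = '<html><head><script>track()</script></head><body>Vesti</body></html>'
cleaned = strip_head(page)
```

This drops scripts and stylesheets wholesale instead of filtering them individually, which is why the recipe can get away with a single string operation.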


@ -356,6 +356,10 @@ h2.library_name {
color: red;
}

#booklist a.summary_thumb img {
border: none
}

#booklist > #pagelist { display: none; }

#goto_page_dialog ul {
@ -474,5 +478,9 @@ h2.library_name {
color: red
}

.details a.details_thumb img {
border: none
}

/* }}} */


@ -1,6 +1,6 @@
<div id="details_{id}" class="details">
<div class="left">
<a href="{get_url}" title="Click to read {title} in the {fmt} format" class="details_thumb"><img alt="Cover of {title}" src="{prefix}/get/cover/{id}" /></a>
</div>
<div class="right">
<div class="field formats">{formats}</div>


@ -1,6 +1,6 @@
<div id="summary_{id}" class="summary">
<div class="left">
<a href="{get_url}" class="summary_thumb" title="Click to read {title} in the {fmt} format"><img alt="Cover of {title}" src="{prefix}/get/thumb_90_120/{id}" /></a>
{get_button}
</div>
<div class="right">


@ -12,14 +12,14 @@ msgstr ""
"Report-Msgid-Bugs-To: Debian iso-codes team <pkg-isocodes-"
"devel@lists.alioth.debian.org>\n"
"POT-Creation-Date: 2011-11-25 14:01+0000\n"
"PO-Revision-Date: 2013-02-19 18:01+0000\n"
"Last-Translator: Ferran Rius <frius64@hotmail.com>\n"
"Language-Team: Catalan <linux@softcatala.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"X-Launchpad-Export-Date: 2013-02-20 04:50+0000\n"
"X-Generator: Launchpad (build 16491)\n"
"Language: ca\n"
#. name for aaa

@ -1920,7 +1920,7 @@ msgstr "Arára; Mato Grosso"

#. name for axk
msgid "Yaka (Central African Republic)"
msgstr "Yaka (República Centrafricana)"

#. name for axm
msgid "Armenian; Middle"

@ -3528,7 +3528,7 @@ msgstr "Buamu"

#. name for boy
msgid "Bodo (Central African Republic)"
msgstr "Bodo (República Centrafricana)"

#. name for boz
msgid "Bozo; Tiéyaxo"

@ -7928,7 +7928,7 @@ msgstr "Oromo; occidental"

#. name for gba
msgid "Gbaya (Central African Republic)"
msgstr "Gbaya (República Centrafricana)"

#. name for gbb
msgid "Kaytetye"

@ -11184,7 +11184,7 @@ msgstr ""

#. name for kbn
msgid "Kare (Central African Republic)"
msgstr "Kare (República Centrafricana)"

#. name for kbo
msgid "Keliko"

@ -20720,7 +20720,7 @@ msgstr "Pitjantjatjara"

#. name for pka
msgid "Prākrit; Ardhamāgadhī"
msgstr "Pràcrit; Ardhamagadhi"

#. name for pkb
msgid "Pokomo"

@ -20776,31 +20776,31 @@ msgstr "Polonombauk"

#. name for plc
msgid "Palawano; Central"
msgstr "Palawà; Central"

#. name for pld
msgid "Polari"
msgstr "Polari"

#. name for ple
msgid "Palu'e"
msgstr "Palue"

#. name for plg
msgid "Pilagá"
msgstr "Pilagà"

#. name for plh
msgid "Paulohi"
msgstr "Paulohi"

#. name for pli
msgid "Pali"
msgstr "Pali"

#. name for plj
msgid "Polci"
msgstr "Polci"

#. name for plk
msgid "Shina; Kohistani"

@ -20812,19 +20812,19 @@ msgstr "Palaung; Shwe"

#. name for pln
msgid "Palenquero"
msgstr "Palenquero"

#. name for plo
msgid "Popoluca; Oluta"
msgstr "Popoluca; Oluta"

#. name for plp
msgid "Palpa"
msgstr "Palpa"

#. name for plq
msgid "Palaic"
msgstr "Palaic"

#. name for plr
msgid "Senoufo; Palaka"

@ -20840,15 +20840,15 @@ msgstr "Malgaix; Plateau"

#. name for plu
msgid "Palikúr"
msgstr "Palikur"

#. name for plv
msgid "Palawano; Southwest"
msgstr "Palawà; Sudoccidental"

#. name for plw
msgid "Palawano; Brooke's Point"
msgstr "Palawà; Brooke"

#. name for ply
msgid "Bolyu"

@ -20856,43 +20856,43 @@ msgstr ""

#. name for plz
msgid "Paluan"
msgstr "Paluà"

#. name for pma
msgid "Paama"
msgstr "Paama"

#. name for pmb
msgid "Pambia"
msgstr "Pambia"

#. name for pmc
msgid "Palumata"
msgstr "Palumata"

#. name for pme
msgid "Pwaamei"
msgstr "Pwaamei"

#. name for pmf
msgid "Pamona"
msgstr "Pamona"

#. name for pmh
msgid "Prākrit; Māhārāṣṭri"
msgstr "Pràcrit; Maharastri"

#. name for pmi
msgid "Pumi; Northern"
msgstr "Pumi; Septentrional"

#. name for pmj
msgid "Pumi; Southern"
msgstr "Pumi; Meridional"

#. name for pmk
msgid "Pamlico"
msgstr "Algonquí Carolina"

#. name for pml
msgid "Lingua Franca"

@ -20904,11 +20904,11 @@ msgstr "Pol"

#. name for pmn
msgid "Pam"
msgstr "Pam"

#. name for pmo
msgid "Pom"
msgstr "Pom"

#. name for pmq
msgid "Pame; Northern"

@ -20916,11 +20916,11 @@ msgstr "Pame; Septentrional"

#. name for pmr
msgid "Paynamar"
msgstr "Paynamar"

#. name for pms
msgid "Piemontese"
msgstr "Piemontès"

#. name for pmt
msgid "Tuamotuan"

@ -20956,7 +20956,7 @@ msgstr "Panjabi; Occidental"

#. name for pnc
msgid "Pannei"
msgstr "Pannei"

#. name for pne
msgid "Penan; Western"

@ -20964,11 +20964,11 @@ msgstr "Penan; Occidental"

#. name for png
msgid "Pongu"
msgstr "Pongu"

#. name for pnh
msgid "Penrhyn"
msgstr "Penrhyn"

#. name for pni
msgid "Aoheng"

@ -20976,27 +20976,27 @@ msgstr ""

#. name for pnm
msgid "Punan Batu 1"
msgstr "Punan Batu"

#. name for pnn
msgid "Pinai-Hagahai"
msgstr "Pinai-Hagahai"

#. name for pno
msgid "Panobo"
msgstr "Panobo"

#. name for pnp
msgid "Pancana"
msgstr "Pancana"

#. name for pnq
msgid "Pana (Burkina Faso)"
msgstr "Pana (Burkina Faso)"

#. name for pnr
msgid "Panim"
msgstr "Panim"

#. name for pns
msgid "Ponosakan"

@ -21028,7 +21028,7 @@ msgstr ""

#. name for pnz
msgid "Pana (Central African Republic)"
msgstr "Pana (República Centrafricana)"

#. name for poc
msgid "Poqomam"

@ -21056,7 +21056,7 @@ msgstr ""

#. name for poi
msgid "Popoluca; Highland"
msgstr "Popoluca; Muntanya"

#. name for pok
msgid "Pokangá"

@ -21084,7 +21084,7 @@ msgstr ""

#. name for poq
msgid "Popoluca; Texistepec"
msgstr "Popoluca; Texistepec"

#. name for por
msgid "Portuguese"

@ -21092,7 +21092,7 @@ msgstr "Portuguès"

#. name for pos
msgid "Popoluca; Sayula"
msgstr "Popoluca; Sayula"

#. name for pot
msgid "Potawatomi"

@ -21336,7 +21336,7 @@ msgstr "Paixtú; Central"

#. name for psu
msgid "Prākrit; Sauraseni"
msgstr "Pràcrit; Sauraseni"

#. name for psw
msgid "Port Sandwich"


@ -10,19 +10,19 @@ msgstr ""
"Report-Msgid-Bugs-To: Debian iso-codes team <pkg-isocodes-"
"devel@lists.alioth.debian.org>\n"
"POT-Creation-Date: 2011-11-25 14:01+0000\n"
"PO-Revision-Date: 2013-02-18 02:41+0000\n"
"Last-Translator: pedro jorge oliveira <pedrojorgeoliveira93@gmail.com>\n"
"Language-Team: Portuguese <pt@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"X-Launchpad-Export-Date: 2013-02-19 04:56+0000\n"
"X-Generator: Launchpad (build 16491)\n"
"Language: pt\n"

#. name for aaa
msgid "Ghotuo"
msgstr "Ghotuo"

#. name for aab
msgid "Alumu-Tesu"

@ -498,7 +498,7 @@ msgstr ""

#. name for afr
msgid "Afrikaans"
msgstr "Africano"

#. name for afs
msgid "Creole; Afro-Seminole"

@ -910,7 +910,7 @@ msgstr ""

#. name for ale
msgid "Aleut"
msgstr "Aleúte"

#. name for alf
msgid "Alege"

@ -30818,7 +30818,7 @@ msgstr ""

#. name for zxx
msgid "No linguistic content"
msgstr "Sem conteúdo linguistico"

#. name for zyb
msgid "Zhuang; Yongbei"


@ -9,14 +9,14 @@ msgstr ""
"Report-Msgid-Bugs-To: Debian iso-codes team <pkg-isocodes-"
"devel@lists.alioth.debian.org>\n"
"POT-Creation-Date: 2011-11-25 14:01+0000\n"
"PO-Revision-Date: 2013-02-17 21:57+0000\n"
"Last-Translator: Neliton Pereira Jr. <nelitonpjr@gmail.com>\n"
"Language-Team: Brazilian Portuguese\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"X-Launchpad-Export-Date: 2013-02-18 04:49+0000\n"
"X-Generator: Launchpad (build 16491)\n"
"Language: \n"

#. name for aaa

@ -141,7 +141,7 @@ msgstr ""

#. name for abh
msgid "Arabic; Tajiki"
msgstr "Arábico; Tajiki"

#. name for abi
msgid "Abidji"


@ -9,43 +9,43 @@ msgstr ""
"Report-Msgid-Bugs-To: Debian iso-codes team <pkg-isocodes-"
"devel@lists.alioth.debian.org>\n"
"POT-Creation-Date: 2011-11-25 14:01+0000\n"
"PO-Revision-Date: 2013-02-15 06:39+0000\n"
"Last-Translator: baduong <Unknown>\n"
"Language-Team: Vietnamese <gnomevi-list@lists.sourceforge.net>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"X-Launchpad-Export-Date: 2013-02-16 04:56+0000\n"
"X-Generator: Launchpad (build 16491)\n"
"Language: vi\n"

#. name for aaa
msgid "Ghotuo"
msgstr "Ghotuo"

#. name for aab
msgid "Alumu-Tesu"
msgstr "Alumu-Tesu"

#. name for aac
msgid "Ari"
msgstr "Ari"

#. name for aad
msgid "Amal"
msgstr "Amal"

#. name for aae
msgid "Albanian; Arbëreshë"
msgstr "An-ba-ni"

#. name for aaf
msgid "Aranadan"
msgstr "Aranadan"

#. name for aag
msgid "Ambrak"
msgstr "Ambrak"

#. name for aah
msgid "Arapesh; Abu'"

@ -30817,7 +30817,7 @@ msgstr ""

#. name for zxx
msgid "No linguistic content"
msgstr "Không có nội dung kiểu ngôn ngữ"

#. name for zyb
msgid "Zhuang; Yongbei"

@ -30829,11 +30829,11 @@ msgstr ""

#. name for zyj
msgid "Zhuang; Youjiang"
msgstr "Zhuang; Youjiang"

#. name for zyn
msgid "Zhuang; Yongnan"
msgstr "Zhuang; Yongnan"

#. name for zyp
msgid "Zyphe"


@ -4,7 +4,7 @@ __license__ = 'GPL v3'
__copyright__ = '2008, Kovid Goyal kovid@kovidgoyal.net'
__docformat__ = 'restructuredtext en'

__appname__   = u'calibre'
numeric_version = (0, 9, 20)
__version__   = u'.'.join(map(unicode, numeric_version))
__author__    = u"Kovid Goyal <kovid@kovidgoyal.net>"


@ -16,15 +16,14 @@ import apsw
from calibre import isbytestring, force_unicode, prints
from calibre.constants import (iswindows, filesystem_encoding,
        preferred_encoding)
from calibre.ptempfile import PersistentTemporaryFile
from calibre.db.schema_upgrades import SchemaUpgrade
from calibre.library.field_metadata import FieldMetadata
from calibre.ebooks.metadata import title_sort, author_to_author_sort
from calibre.utils.icu import strcmp
from calibre.utils.config import to_json, from_json, prefs, tweaks
from calibre.utils.date import utcfromtimestamp, parse_date
from calibre.utils.filenames import (is_case_sensitive, samefile, hardlink_file)
from calibre.db.tables import (OneToOneTable, ManyToOneTable, ManyToManyTable,
        SizeTable, FormatsTable, AuthorsTable, IdentifiersTable,
        CompositeTable, LanguagesTable)
@ -855,38 +854,75 @@ class DB(object):
        ans = {}
        if path is not None:
            stat = os.stat(path)
            ans['path'] = path
            ans['size'] = stat.st_size
            ans['mtime'] = utcfromtimestamp(stat.st_mtime)
        return ans

    def has_format(self, book_id, fmt, fname, path):
        return self.format_abspath(book_id, fmt, fname, path) is not None

    def copy_cover_to(self, path, dest, windows_atomic_move=None, use_hardlink=False):
        path = os.path.join(self.library_path, path, 'cover.jpg')
        if windows_atomic_move is not None:
            if not isinstance(dest, basestring):
                raise Exception("Error, you must pass the dest as a path when"
                        " using windows_atomic_move")
            if os.access(path, os.R_OK) and dest and not samefile(dest, path):
                windows_atomic_move.copy_path_to(path, dest)
                return True
        else:
            if os.access(path, os.R_OK):
                try:
                    f = lopen(path, 'rb')
                except (IOError, OSError):
                    time.sleep(0.2)
                    f = lopen(path, 'rb')
                with f:
                    if hasattr(dest, 'write'):
                        shutil.copyfileobj(f, dest)
                        if hasattr(dest, 'flush'):
                            dest.flush()
                        return True
                    elif dest and not samefile(dest, path):
                        if use_hardlink:
                            try:
                                hardlink_file(path, dest)
                                return True
                            except:
                                pass
                        with lopen(dest, 'wb') as d:
                            shutil.copyfileobj(f, d)
                        return True
        return False

    def copy_format_to(self, book_id, fmt, fname, path, dest,
                       windows_atomic_move=None, use_hardlink=False):
        path = self.format_abspath(book_id, fmt, fname, path)
        if path is None:
            return False
        if windows_atomic_move is not None:
            if not isinstance(dest, basestring):
                raise Exception("Error, you must pass the dest as a path when"
                        " using windows_atomic_move")
            if dest and not samefile(dest, path):
                windows_atomic_move.copy_path_to(path, dest)
        else:
            if hasattr(dest, 'write'):
                with lopen(path, 'rb') as f:
                    shutil.copyfileobj(f, dest)
                if hasattr(dest, 'flush'):
                    dest.flush()
            elif dest and not samefile(dest, path):
                if use_hardlink:
                    try:
                        hardlink_file(path, dest)
                        return True
                    except:
                        pass
                with lopen(path, 'rb') as f, lopen(dest, 'wb') as d:
                    shutil.copyfileobj(f, d)
        return True

    # }}}
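The new `copy_cover_to`/`copy_format_to` methods in this hunk share one pattern: `dest` may be either a writable file-like object or a filesystem path, with an optional hardlink fast path that falls back to a real copy. A simplified, self-contained sketch of that pattern using only the standard library (`copy_payload_to` is a hypothetical name, not calibre API; calibre's version adds locking, retry, and Windows atomic-move handling):

```python
import os
import shutil

def copy_payload_to(src_path, dest, use_hardlink=False):
    """Copy src_path into dest, which is either a writable file-like
    object or a destination path. Returns True if data was copied."""
    if hasattr(dest, 'write'):
        # File-like object: stream the bytes straight into it.
        with open(src_path, 'rb') as f:
            shutil.copyfileobj(f, dest)
        if hasattr(dest, 'flush'):
            dest.flush()
        return True
    if not dest:
        return False
    if os.path.exists(dest) and os.path.samefile(src_path, dest):
        return False  # dest is already the same file, nothing to do
    if use_hardlink:
        try:
            os.link(src_path, dest)  # cheap when on the same filesystem
            return True
        except OSError:
            pass  # cross-device or unsupported: fall back to a real copy
    with open(src_path, 'rb') as f, open(dest, 'wb') as d:
        shutil.copyfileobj(f, d)
    return True
```

Accepting both streams and paths in one entry point is what lets the higher-level `Cache.cover()` code build its `as_file`/`as_path`/bytes variants on a single copy primitive.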


@ -8,16 +8,21 @@ __copyright__ = '2011, Kovid Goyal <kovid@kovidgoyal.net>'
__docformat__ = 'restructuredtext en'

import os, traceback
from io import BytesIO
from collections import defaultdict
from functools import wraps, partial

from calibre.db import SPOOL_SIZE
from calibre.db.categories import get_categories
from calibre.db.locking import create_locks, RecordLock
from calibre.db.errors import NoSuchFormat
from calibre.db.fields import create_field
from calibre.db.search import Search
from calibre.db.tables import VirtualTable
from calibre.db.lazy import FormatMetadata, FormatsList
from calibre.ebooks.metadata.book.base import Metadata
from calibre.ptempfile import (base_dir, PersistentTemporaryFile,
                               SpooledTemporaryFile)
from calibre.utils.date import now
from calibre.utils.icu import sort_key
@ -103,27 +108,6 @@ class Cache(object):
    def field_metadata(self):
        return self.backend.field_metadata
def _get_metadata(self, book_id, get_user_categories=True): # {{{ def _get_metadata(self, book_id, get_user_categories=True): # {{{
mi = Metadata(None, template_cache=self.formatter_template_cache) mi = Metadata(None, template_cache=self.formatter_template_cache)
author_ids = self._field_ids_for('authors', book_id) author_ids = self._field_ids_for('authors', book_id)
@ -162,7 +146,7 @@ class Cache(object):
if not formats: if not formats:
good_formats = None good_formats = None
else: else:
mi.format_metadata = FormatMetadata(self, id, formats) mi.format_metadata = FormatMetadata(self, book_id, formats)
good_formats = FormatsList(formats, mi.format_metadata) good_formats = FormatsList(formats, mi.format_metadata)
mi.formats = good_formats mi.formats = good_formats
mi.has_cover = _('Yes') if self._field_for('cover', book_id, mi.has_cover = _('Yes') if self._field_for('cover', book_id,
@ -397,15 +381,184 @@ class Cache(object):
:param as_path: If True return the image as a path pointing to a :param as_path: If True return the image as a path pointing to a
temporary file temporary file
''' '''
if as_file:
ret = SpooledTemporaryFile(SPOOL_SIZE)
if not self.copy_cover_to(book_id, ret): return
ret.seek(0)
elif as_path:
pt = PersistentTemporaryFile('_dbcover.jpg')
with pt:
if not self.copy_cover_to(book_id, pt): return
ret = pt.name
else:
buf = BytesIO()
if not self.copy_cover_to(book_id, buf): return
ret = buf.getvalue()
if as_image:
from PyQt4.Qt import QImage
i = QImage()
i.loadFromData(ret)
ret = i
return ret
@api
def copy_cover_to(self, book_id, dest, use_hardlink=False):
'''
Copy the cover to the file like object ``dest``. Returns False
if no cover exists or dest is the same file as the current cover.
dest can also be a path in which case the cover is
copied to it iff the path is different from the current path (taking
case sensitivity into account).
'''
with self.read_lock: with self.read_lock:
try: try:
path = self._field_for('path', book_id).replace('/', os.sep) path = self._field_for('path', book_id).replace('/', os.sep)
except: except:
return None return False
with self.record_lock.lock(book_id): with self.record_lock.lock(book_id):
return self.backend.cover(path, as_file=as_file, as_image=as_image, return self.backend.copy_cover_to(path, dest,
as_path=as_path) use_hardlink=use_hardlink)
@api
def copy_format_to(self, book_id, fmt, dest, use_hardlink=False):
'''
Copy the format ``fmt`` to the file like object ``dest``. If the
specified format does not exist, raises :class:`NoSuchFormat` error.
dest can also be a path, in which case the format is copied to it, iff
the path is different from the current path (taking case sensitivity
into account).
'''
with self.read_lock:
try:
name = self.fields['formats'].format_fname(book_id, fmt)
path = self._field_for('path', book_id).replace('/', os.sep)
except:
raise NoSuchFormat('Record %d has no %s file'%(book_id, fmt))
with self.record_lock.lock(book_id):
return self.backend.copy_format_to(book_id, fmt, name, path, dest,
use_hardlink=use_hardlink)
@read_api
def format_abspath(self, book_id, fmt):
'''
Return absolute path to the ebook file of format ``fmt``
Currently used only in calibredb list, the viewer and the catalogs (via
get_data_as_dict()).
Apart from the viewer, I don't believe any of the others do any file
I/O with the results of this call.
'''
try:
name = self.fields['formats'].format_fname(book_id, fmt)
path = self._field_for('path', book_id).replace('/', os.sep)
except:
return None
if name and path:
return self.backend.format_abspath(book_id, fmt, name, path)
@read_api
def has_format(self, book_id, fmt):
'Return True iff the format exists on disk'
try:
name = self.fields['formats'].format_fname(book_id, fmt)
path = self._field_for('path', book_id).replace('/', os.sep)
except:
return False
return self.backend.has_format(book_id, fmt, name, path)
@read_api
def formats(self, book_id, verify_formats=True):
'''
Return tuple of all formats for the specified book. If verify_formats
is True, verifies that the files exist on disk.
'''
ans = self.field_for('formats', book_id)
if verify_formats and ans:
try:
path = self._field_for('path', book_id).replace('/', os.sep)
except:
return ()
def verify(fmt):
try:
name = self.fields['formats'].format_fname(book_id, fmt)
except:
return False
return self.backend.has_format(book_id, fmt, name, path)
ans = tuple(x for x in ans if verify(x))
return ans
@api
def format(self, book_id, fmt, as_file=False, as_path=False, preserve_filename=False):
'''
Return the ebook format as a bytestring or `None` if the format doesn't exist,
or we don't have permission to read the ebook file.
:param as_file: If True the ebook format is returned as a file object. Note
that the file object is a SpooledTemporaryFile, so if what you want to
do is copy the format to another file, use :method:`copy_format_to`
instead for performance.
:param as_path: Copies the format file to a temp file and returns the
path to the temp file
:param preserve_filename: If True and returning a path the filename is
the same as that used in the library. Note that using
this means that repeated calls yield the same
temp file (which is re-created each time)
'''
with self.read_lock:
ext = ('.'+fmt.lower()) if fmt else ''
try:
fname = self.fields['formats'].format_fname(book_id, fmt)
except:
return None
fname += ext
if as_path:
if preserve_filename:
bd = base_dir()
d = os.path.join(bd, 'format_abspath')
try:
os.makedirs(d)
except:
pass
ret = os.path.join(d, fname)
with self.record_lock.lock(book_id):
try:
self.copy_format_to(book_id, fmt, ret)
except NoSuchFormat:
return None
else:
with PersistentTemporaryFile(ext) as pt, self.record_lock.lock(book_id):
try:
self.copy_format_to(book_id, fmt, pt)
except NoSuchFormat:
return None
ret = pt.name
elif as_file:
ret = SpooledTemporaryFile(SPOOL_SIZE)
with self.record_lock.lock(book_id):
try:
self.copy_format_to(book_id, fmt, ret)
except NoSuchFormat:
return None
ret.seek(0)
# Various bits of code try to use the name as the default
# title when reading metadata, so set it
ret.name = fname
else:
buf = BytesIO()
with self.record_lock.lock(book_id):
try:
self.copy_format_to(book_id, fmt, buf)
except NoSuchFormat:
return None
ret = buf.getvalue()
return ret
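The `format()` method above spools the data through a `SpooledTemporaryFile` so that small formats stay in memory and only large ones hit disk. A minimal standalone sketch of that return-convention, using only the standard library (the `SPOOL_SIZE` value here is an assumption; calibre defines its own constant in `calibre.db`):

```python
from io import BytesIO
from tempfile import SpooledTemporaryFile

SPOOL_SIZE = 30 * 1024 * 1024  # assumed value; calibre.db defines the real one

def read_format(data, as_file=False):
    """Mimic format()'s return conventions: a rewound file-like object
    when as_file is True, raw bytes otherwise."""
    if as_file:
        ret = SpooledTemporaryFile(SPOOL_SIZE)
        ret.write(data)   # stand-in for copy_format_to()
        ret.seek(0)       # rewind so callers read from the start
        return ret
    buf = BytesIO()
    buf.write(data)       # stand-in for copy_format_to()
    return buf.getvalue()
```

Forgetting the `seek(0)` is the classic bug with this pattern; the real method does it explicitly before handing the file object to the caller.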
@read_api @read_api
def multisort(self, fields, ids_to_sort=None): def multisort(self, fields, ids_to_sort=None):
@ -455,6 +608,14 @@ class Cache(object):
return get_categories(self, sort=sort, book_ids=book_ids, return get_categories(self, sort=sort, book_ids=book_ids,
icon_map=icon_map) icon_map=icon_map)
@write_api
def set_field(self, name, book_id_to_val_map):
# TODO: Specialize title/authors to also update path
# TODO: Handle updating caches used by composite fields
dirtied = self.fields[name].writer.set_books(
book_id_to_val_map, self.backend)
return dirtied
# }}} # }}}
class SortKey(object): class SortKey(object):

View File

@ -12,6 +12,7 @@ from functools import partial
from operator import attrgetter from operator import attrgetter
from future_builtins import map from future_builtins import map
from calibre.ebooks.metadata import author_to_author_sort
from calibre.library.field_metadata import TagsIcons from calibre.library.field_metadata import TagsIcons
from calibre.utils.config_base import tweaks from calibre.utils.config_base import tweaks
from calibre.utils.icu import sort_key from calibre.utils.icu import sort_key
@ -149,8 +150,16 @@ def get_categories(dbcache, sort='name', book_ids=None, icon_map=None):
elif category == 'news': elif category == 'news':
cats = dbcache.fields['tags'].get_news_category(tag_class, book_ids) cats = dbcache.fields['tags'].get_news_category(tag_class, book_ids)
else: else:
cat = fm[category]
brm = book_rating_map
if cat['datatype'] == 'rating' and category != 'rating':
brm = dbcache.fields[category].book_value_map
cats = dbcache.fields[category].get_categories( cats = dbcache.fields[category].get_categories(
tag_class, book_rating_map, lang_map, book_ids) tag_class, brm, lang_map, book_ids)
if (category != 'authors' and cat['datatype'] == 'text' and
cat['is_multiple'] and cat['display'].get('is_names', False)):
for item in cats:
item.sort = author_to_author_sort(item.sort)
sort_categories(cats, sort) sort_categories(cats, sort)
categories[category] = cats categories[category] = cats

View File

@ -12,6 +12,7 @@ from threading import Lock
from collections import defaultdict, Counter from collections import defaultdict, Counter
from calibre.db.tables import ONE_ONE, MANY_ONE, MANY_MANY from calibre.db.tables import ONE_ONE, MANY_ONE, MANY_MANY
from calibre.db.write import Writer
from calibre.ebooks.metadata import title_sort from calibre.ebooks.metadata import title_sort
from calibre.utils.config_base import tweaks from calibre.utils.config_base import tweaks
from calibre.utils.icu import sort_key from calibre.utils.icu import sort_key
@ -44,6 +45,7 @@ class Field(object):
self.category_formatter = lambda x:'\u2605'*int(x/2) self.category_formatter = lambda x:'\u2605'*int(x/2)
elif name == 'languages': elif name == 'languages':
self.category_formatter = calibre_langcode_to_name self.category_formatter = calibre_langcode_to_name
self.writer = Writer(self)
@property @property
def metadata(self): def metadata(self):

View File

@ -7,19 +7,36 @@ __license__ = 'GPL v3'
__copyright__ = '2011, Kovid Goyal <kovid@kovidgoyal.net>' __copyright__ = '2011, Kovid Goyal <kovid@kovidgoyal.net>'
__docformat__ = 'restructuredtext en' __docformat__ = 'restructuredtext en'
import unittest, os, shutil import unittest, os, shutil, tempfile, atexit
from functools import partial
from io import BytesIO
from future_builtins import map from future_builtins import map
rmtree = partial(shutil.rmtree, ignore_errors=True)
class BaseTest(unittest.TestCase): class BaseTest(unittest.TestCase):
def setUp(self):
self.library_path = self.mkdtemp()
self.create_db(self.library_path)
def tearDown(self):
shutil.rmtree(self.library_path)
def create_db(self, library_path): def create_db(self, library_path):
from calibre.library.database2 import LibraryDatabase2 from calibre.library.database2 import LibraryDatabase2
if LibraryDatabase2.exists_at(library_path): if LibraryDatabase2.exists_at(library_path):
raise ValueError('A library already exists at %r'%library_path) raise ValueError('A library already exists at %r'%library_path)
src = os.path.join(os.path.dirname(__file__), 'metadata.db') src = os.path.join(os.path.dirname(__file__), 'metadata.db')
db = os.path.join(library_path, 'metadata.db') dest = os.path.join(library_path, 'metadata.db')
shutil.copyfile(src, db) shutil.copyfile(src, dest)
return db db = LibraryDatabase2(library_path)
db.set_cover(1, I('lt.png', data=True))
db.set_cover(2, I('polish.png', data=True))
db.add_format(1, 'FMT1', BytesIO(b'book1fmt1'), index_is_id=True)
db.add_format(1, 'FMT2', BytesIO(b'book1fmt2'), index_is_id=True)
db.add_format(2, 'FMT1', BytesIO(b'book2fmt1'), index_is_id=True)
return dest
def init_cache(self, library_path): def init_cache(self, library_path):
from calibre.db.backend import DB from calibre.db.backend import DB
@ -29,20 +46,38 @@ class BaseTest(unittest.TestCase):
cache.init() cache.init()
return cache return cache
def mkdtemp(self):
ans = tempfile.mkdtemp(prefix='db_test_')
atexit.register(rmtree, ans)
return ans
def init_old(self, library_path):
from calibre.library.database2 import LibraryDatabase2
return LibraryDatabase2(library_path)
def clone_library(self, library_path):
if not hasattr(self, 'clone_dir'):
self.clone_dir = tempfile.mkdtemp()
atexit.register(rmtree, self.clone_dir)
self.clone_count = 0
self.clone_count += 1
dest = os.path.join(self.clone_dir, str(self.clone_count))
shutil.copytree(library_path, dest)
return dest
def compare_metadata(self, mi1, mi2): def compare_metadata(self, mi1, mi2):
allfk1 = mi1.all_field_keys() allfk1 = mi1.all_field_keys()
allfk2 = mi2.all_field_keys() allfk2 = mi2.all_field_keys()
self.assertEqual(allfk1, allfk2) self.assertEqual(allfk1, allfk2)
all_keys = {'format_metadata', 'id', 'application_id', all_keys = {'format_metadata', 'id', 'application_id',
'author_sort_map', 'author_link_map', 'book_size', 'author_sort_map', 'author_link_map', 'book_size',
'ondevice_col', 'last_modified'}.union(allfk1) 'ondevice_col', 'last_modified', 'has_cover',
'cover_data'}.union(allfk1)
for attr in all_keys: for attr in all_keys:
if attr == 'user_metadata': continue if attr == 'user_metadata': continue
if attr == 'format_metadata': continue # TODO: Not implemented yet
attr1, attr2 = getattr(mi1, attr), getattr(mi2, attr) attr1, attr2 = getattr(mi1, attr), getattr(mi2, attr)
if attr == 'formats': if attr == 'formats':
continue # TODO: Not implemented yet
attr1, attr2 = map(lambda x:tuple(x) if x else (), (attr1, attr2)) attr1, attr2 = map(lambda x:tuple(x) if x else (), (attr1, attr2))
self.assertEqual(attr1, attr2, self.assertEqual(attr1, attr2,
'%s not the same: %r != %r'%(attr, attr1, attr2)) '%s not the same: %r != %r'%(attr, attr1, attr2))

View File

@ -7,21 +7,13 @@ __license__ = 'GPL v3'
__copyright__ = '2011, Kovid Goyal <kovid@kovidgoyal.net>' __copyright__ = '2011, Kovid Goyal <kovid@kovidgoyal.net>'
__docformat__ = 'restructuredtext en' __docformat__ = 'restructuredtext en'
import shutil, unittest, tempfile, datetime import unittest, datetime
from cStringIO import StringIO
from calibre.utils.date import utc_tz from calibre.utils.date import utc_tz
from calibre.db.tests.base import BaseTest from calibre.db.tests.base import BaseTest
class ReadingTest(BaseTest): class ReadingTest(BaseTest):
def setUp(self):
self.library_path = tempfile.mkdtemp()
self.create_db(self.library_path)
def tearDown(self):
shutil.rmtree(self.library_path)
def test_read(self): # {{{ def test_read(self): # {{{
'Test the reading of data from the database' 'Test the reading of data from the database'
cache = self.init_cache(self.library_path) cache = self.init_cache(self.library_path)
@ -55,7 +47,7 @@ class ReadingTest(BaseTest):
'#tags':(), '#tags':(),
'#yesno':None, '#yesno':None,
'#comments': None, '#comments': None,
'size':None,
}, },
2 : { 2 : {
@ -66,7 +58,7 @@ class ReadingTest(BaseTest):
'series' : 'A Series One', 'series' : 'A Series One',
'series_index': 1.0, 'series_index': 1.0,
'tags':('Tag One', 'Tag Two'), 'tags':('Tag One', 'Tag Two'),
'formats': (), 'formats': ('FMT1',),
'rating': 4.0, 'rating': 4.0,
'identifiers': {'test':'one'}, 'identifiers': {'test':'one'},
'timestamp': datetime.datetime(2011, 9, 5, 21, 6, 'timestamp': datetime.datetime(2011, 9, 5, 21, 6,
@ -86,6 +78,7 @@ class ReadingTest(BaseTest):
'#tags':('My Tag One', 'My Tag Two'), '#tags':('My Tag One', 'My Tag Two'),
'#yesno':True, '#yesno':True,
'#comments': '<div>My Comments One<p></p></div>', '#comments': '<div>My Comments One<p></p></div>',
'size':9,
}, },
1 : { 1 : {
'title': 'Title Two', 'title': 'Title Two',
@ -96,7 +89,7 @@ class ReadingTest(BaseTest):
'series_index': 2.0, 'series_index': 2.0,
'rating': 6.0, 'rating': 6.0,
'tags': ('Tag One', 'News'), 'tags': ('Tag One', 'News'),
'formats':(), 'formats':('FMT1', 'FMT2'),
'identifiers': {'test':'two'}, 'identifiers': {'test':'two'},
'timestamp': datetime.datetime(2011, 9, 6, 6, 0, 'timestamp': datetime.datetime(2011, 9, 6, 6, 0,
tzinfo=utc_tz), tzinfo=utc_tz),
@ -115,6 +108,7 @@ class ReadingTest(BaseTest):
'#tags':('My Tag Two',), '#tags':('My Tag Two',),
'#yesno':False, '#yesno':False,
'#comments': '<div>My Comments Two<p></p></div>', '#comments': '<div>My Comments Two<p></p></div>',
'size':9,
}, },
} }
@ -172,22 +166,41 @@ class ReadingTest(BaseTest):
'Test get_metadata() returns the same data for both backends' 'Test get_metadata() returns the same data for both backends'
from calibre.library.database2 import LibraryDatabase2 from calibre.library.database2 import LibraryDatabase2
old = LibraryDatabase2(self.library_path) old = LibraryDatabase2(self.library_path)
for i in xrange(1, 3): old_metadata = {i:old.get_metadata(
old.add_format(i, 'txt%d'%i, StringIO(b'random%d'%i), i, index_is_id=True, get_cover=True, cover_as_data=True) for i in
index_is_id=True)
old.add_format(i, 'text%d'%i, StringIO(b'random%d'%i),
index_is_id=True)
old_metadata = {i:old.get_metadata(i, index_is_id=True) for i in
xrange(1, 4)} xrange(1, 4)}
for mi in old_metadata.itervalues():
mi.format_metadata = dict(mi.format_metadata)
if mi.formats:
mi.formats = tuple(mi.formats)
old = None old = None
cache = self.init_cache(self.library_path) cache = self.init_cache(self.library_path)
new_metadata = {i:cache.get_metadata(i) for i in xrange(1, 4)} new_metadata = {i:cache.get_metadata(
i, get_cover=True, cover_as_data=True) for i in xrange(1, 4)}
cache = None cache = None
for mi2, mi1 in zip(new_metadata.values(), old_metadata.values()): for mi2, mi1 in zip(new_metadata.values(), old_metadata.values()):
self.compare_metadata(mi1, mi2) self.compare_metadata(mi1, mi2)
# }}}
def test_get_cover(self): # {{{
'Test cover() returns the same data for both backends'
from calibre.library.database2 import LibraryDatabase2
old = LibraryDatabase2(self.library_path)
covers = {i: old.cover(i, index_is_id=True) for i in old.all_ids()}
old = None
cache = self.init_cache(self.library_path)
for book_id, cdata in covers.iteritems():
self.assertEqual(cdata, cache.cover(book_id), 'Reading of cover failed')
f = cache.cover(book_id, as_file=True)
self.assertEqual(cdata, f.read() if f else f, 'Reading of cover as file failed')
if cdata:
with open(cache.cover(book_id, as_path=True), 'rb') as f:
self.assertEqual(cdata, f.read(), 'Reading of cover as path failed')
else:
self.assertEqual(cdata, cache.cover(book_id, as_path=True),
'Reading of null cover as path failed')
# }}} # }}}
@ -227,8 +240,12 @@ class ReadingTest(BaseTest):
# User categories # User categories
'@Good Authors:One', '@Good Series.good tags:two', '@Good Authors:One', '@Good Series.good tags:two',
# TODO: Tests for searching the size and #formats columns and # Cover/Formats
# cover:true|false 'cover:true', 'cover:false', 'formats:true', 'formats:false',
'formats:#>1', 'formats:#=1', 'formats:=fmt1', 'formats:=fmt2',
'formats:=fmt1 or formats:fmt2', '#formats:true', '#formats:false',
'#formats:fmt1', '#formats:fmt2', '#formats:fmt1 and #formats:fmt2',
)} )}
old = None old = None
@ -247,9 +264,67 @@ class ReadingTest(BaseTest):
old = LibraryDatabase2(self.library_path) old = LibraryDatabase2(self.library_path)
old_categories = old.get_categories() old_categories = old.get_categories()
cache = self.init_cache(self.library_path) cache = self.init_cache(self.library_path)
import pprint new_categories = cache.get_categories()
pprint.pprint(old_categories) self.assertEqual(set(old_categories), set(new_categories),
pprint.pprint(cache.get_categories()) 'The set of old categories is not the same as the set of new categories')
def compare_category(category, old, new):
for attr in ('name', 'original_name', 'id', 'count',
'is_hierarchical', 'is_editable', 'is_searchable',
'id_set', 'avg_rating', 'sort', 'use_sort_as_name',
'tooltip', 'icon', 'category'):
oval, nval = getattr(old, attr), getattr(new, attr)
if (
(category in {'rating', '#rating'} and attr in {'id_set', 'sort'}) or
(category == 'series' and attr == 'sort') or # Sorting is wrong in old
(category == 'identifiers' and attr == 'id_set') or
(category == '@Good Series') or # Sorting is wrong in old
(category == 'news' and attr in {'count', 'id_set'}) or
(category == 'formats' and attr == 'id_set')
):
continue
self.assertEqual(oval, nval,
'The attribute %s for %s in category %s does not match. Old is %r, New is %r'
%(attr, old.name, category, oval, nval))
for category in old_categories:
old, new = old_categories[category], new_categories[category]
self.assertEqual(len(old), len(new),
'The number of items in the category %s is not the same'%category)
for o, n in zip(old, new):
compare_category(category, o, n)
# }}}
def test_get_formats(self): # {{{
'Test reading ebook formats using the format() method'
from calibre.library.database2 import LibraryDatabase2
old = LibraryDatabase2(self.library_path)
ids = old.all_ids()
lf = {i:set(old.formats(i, index_is_id=True).split(',')) if old.formats(
i, index_is_id=True) else set() for i in ids}
formats = {i:{f:old.format(i, f, index_is_id=True) for f in fmts} for
i, fmts in lf.iteritems()}
old = None
cache = self.init_cache(self.library_path)
for book_id, fmts in lf.iteritems():
self.assertEqual(fmts, set(cache.formats(book_id)),
'Set of formats is not the same')
for fmt in fmts:
old = formats[book_id][fmt]
self.assertEqual(old, cache.format(book_id, fmt),
'Old and new format disagree')
f = cache.format(book_id, fmt, as_file=True)
self.assertEqual(old, f.read(),
'Failed to read format as file')
with open(cache.format(book_id, fmt, as_path=True,
preserve_filename=True), 'rb') as f:
self.assertEqual(old, f.read(),
'Failed to read format as path')
with open(cache.format(book_id, fmt, as_path=True), 'rb') as f:
self.assertEqual(old, f.read(),
'Failed to read format as path')
# }}} # }}}

View File

@ -0,0 +1,92 @@
#!/usr/bin/env python
# vim:fileencoding=UTF-8:ts=4:sw=4:sta:et:sts=4:fdm=marker:ai
from __future__ import (unicode_literals, division, absolute_import,
print_function)
__license__ = 'GPL v3'
__copyright__ = '2013, Kovid Goyal <kovid at kovidgoyal.net>'
__docformat__ = 'restructuredtext en'
import unittest
from collections import namedtuple
from functools import partial
from calibre.utils.date import UNDEFINED_DATE
from calibre.db.tests.base import BaseTest
class WritingTest(BaseTest):
@property
def cloned_library(self):
return self.clone_library(self.library_path)
def create_getter(self, name, getter=None):
if getter is None:
ans = lambda db:partial(db.get_custom, label=name[1:],
index_is_id=True)
else:
ans = lambda db:partial(getattr(db, getter), index_is_id=True)
return ans
def create_setter(self, name, setter=None):
if setter is None:
ans = lambda db:partial(db.set_custom, label=name[1:], commit=True)
else:
ans = lambda db:partial(getattr(db, setter), commit=True)
return ans
def create_test(self, name, vals, getter=None, setter=None):
T = namedtuple('Test', 'name vals getter setter')
return T(name, vals, self.create_getter(name, getter),
self.create_setter(name, setter))
def run_tests(self, tests):
cl = self.cloned_library
results = {}
for test in tests:
results[test] = []
for val in test.vals:
cache = self.init_cache(cl)
cache.set_field(test.name, {1: val})
cached_res = cache.field_for(test.name, 1)
del cache
db = self.init_old(cl)
getter = test.getter(db)
sqlite_res = getter(1)
test.setter(db)(1, val)
old_cached_res = getter(1)
self.assertEqual(old_cached_res, cached_res,
'Failed setting for %s with value %r, cached value not the same. Old: %r != New: %r'%(
test.name, val, old_cached_res, cached_res))
db.refresh()
old_sqlite_res = getter(1)
self.assertEqual(old_sqlite_res, sqlite_res,
'Failed setting for %s, sqlite value not the same: %r != %r'%(
test.name, old_sqlite_res, sqlite_res))
del db
def test_one_one(self):
'Test setting of values in one-one fields'
tests = []
for name, getter, setter in (
('pubdate', 'pubdate', 'set_pubdate'),
('timestamp', 'timestamp', 'set_timestamp'),
('#date', None, None),
):
tests.append(self.create_test(
name, ('2011-1-12', UNDEFINED_DATE, None), getter, setter))
self.run_tests(tests)
def tests():
return unittest.TestLoader().loadTestsFromTestCase(WritingTest)
def run():
unittest.TextTestRunner(verbosity=2).run(tests())
if __name__ == '__main__':
run()

167
src/calibre/db/write.py Normal file
View File

@ -0,0 +1,167 @@
#!/usr/bin/env python
# vim:fileencoding=UTF-8:ts=4:sw=4:sta:et:sts=4:fdm=marker:ai
from __future__ import (unicode_literals, division, absolute_import,
print_function)
__license__ = 'GPL v3'
__copyright__ = '2013, Kovid Goyal <kovid at kovidgoyal.net>'
__docformat__ = 'restructuredtext en'
from functools import partial
from datetime import datetime
from calibre.constants import preferred_encoding, ispy3
from calibre.utils.date import (parse_only_date, parse_date, UNDEFINED_DATE,
isoformat)
# Convert data into values suitable for the db {{{
if ispy3:
unicode = str
def single_text(x):
if x is None:
return x
if not isinstance(x, unicode):
x = x.decode(preferred_encoding, 'replace')
x = x.strip()
return x if x else None
def multiple_text(sep, x):
if x is None:
return ()
if isinstance(x, bytes):
x = x.decode(preferred_encoding, 'replace')
if isinstance(x, unicode):
x = x.split(sep)
x = (y.strip() for y in x if y.strip())
return (' '.join(y.split()) for y in x if y)
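The multi-valued adapter both splits on the separator and collapses internal runs of whitespace in each item. A Python 3 sketch of the same normalization (str-only, so no `preferred_encoding` decode step):

```python
def multiple_text(sep, x):
    # Split a multi-valued text field on sep, drop empty items, and
    # collapse internal whitespace in each surviving item.
    if x is None:
        return ()
    if isinstance(x, str):
        x = x.split(sep)
    x = (y.strip() for y in x if y.strip())
    return tuple(' '.join(y.split()) for y in x)
```

So `'a,  b  c ,,'` becomes `('a', 'b c')`: empty trailing items vanish and the double space inside `b  c` collapses to one.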
def adapt_datetime(x):
if isinstance(x, (unicode, bytes)):
x = parse_date(x, assume_utc=False, as_utc=False)
return x
def adapt_date(x):
if isinstance(x, (unicode, bytes)):
x = parse_only_date(x)
if x is None:
x = UNDEFINED_DATE
return x
def adapt_number(typ, x):
if x is None:
return None
if isinstance(x, (unicode, bytes)):
if x.lower() == 'none':
return None
return typ(x)
def adapt_bool(x):
if isinstance(x, (unicode, bytes)):
x = x.lower()
if x == 'true':
x = True
elif x == 'false':
x = False
elif x == 'none':
x = None
else:
x = bool(int(x))
return x if x is None else bool(x)
def get_adapter(name, metadata):
dt = metadata['datatype']
if dt == 'text':
if metadata['is_multiple']:
ans = partial(multiple_text, metadata['is_multiple']['ui_to_list'])
else:
ans = single_text
elif dt == 'series':
ans = single_text
elif dt == 'datetime':
ans = adapt_date if name == 'pubdate' else adapt_datetime
elif dt == 'int':
ans = partial(adapt_number, int)
elif dt == 'float':
ans = partial(adapt_number, float)
elif dt == 'bool':
ans = adapt_bool
elif dt == 'comments':
ans = single_text
elif dt == 'rating':
ans = lambda x: x if x is None else min(10., max(0., adapt_number(float, x)))
elif dt == 'enumeration':
ans = single_text
elif dt == 'composite':
ans = lambda x: x
if name == 'title':
return lambda x: ans(x) or _('Unknown')
if name == 'authors':
return lambda x: ans(x) or (_('Unknown'),)
if name in {'timestamp', 'last_modified'}:
return lambda x: ans(x) or UNDEFINED_DATE
return ans
# }}}
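The adapters above all follow the same contract: accept `None`, strings (including the literal `'none'`), or an already-typed value, and return a normalized value or `None`. A self-contained Python 3 sketch of the numeric and boolean adapters (str-only, without the bytes-decoding branch of the original):

```python
def adapt_number(typ, x):
    # None and the string 'none' both mean "no value"
    if x is None:
        return None
    if isinstance(x, str) and x.lower() == 'none':
        return None
    return typ(x)

def adapt_bool(x):
    # Accept 'true'/'false'/'none' as well as numeric strings like '0'
    if isinstance(x, str):
        x = x.lower()
        if x == 'true':
            x = True
        elif x == 'false':
            x = False
        elif x == 'none':
            x = None
        else:
            x = bool(int(x))
    return x if x is None else bool(x)
```

This is what lets `set_field()` take raw user input (`'4.5'`, `'True'`, `'none'`) and still write well-typed values into the database.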
def sqlite_datetime(x):
return isoformat(x, sep=' ') if isinstance(x, datetime) else x
def one_one_in_books(book_id_val_map, db, field, *args):
'Set a one-one field in the books table'
if book_id_val_map:
sequence = tuple((sqlite_datetime(v), k) for k, v in book_id_val_map.iteritems())
db.conn.executemany(
'UPDATE books SET %s=? WHERE id=?'%field.metadata['column'], sequence)
field.table.book_col_map.update(book_id_val_map)
return set(book_id_val_map)
def one_one_in_other(book_id_val_map, db, field, *args):
'Set a one-one field in the non-books table, like comments'
deleted = tuple((k,) for k, v in book_id_val_map.iteritems() if v is None)
if deleted:
db.conn.executemany('DELETE FROM %s WHERE book=?'%field.metadata['table'],
deleted)
for book_id in book_id_val_map:
field.table.book_col_map.pop(book_id, None)
updated = {k:v for k, v in book_id_val_map.iteritems() if v is not None}
if updated:
db.conn.executemany('INSERT OR REPLACE INTO %s(book,%s) VALUES (?,?)'%(
field.metadata['table'], field.metadata['column']),
tuple((k, sqlite_datetime(v)) for k, v in updated.iteritems()))
field.table.book_col_map.update(updated)
return set(book_id_val_map)
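The delete-then-upsert split in `one_one_in_other()` can be exercised standalone against an in-memory SQLite database. A sketch with hypothetical table/column names, assuming (as the real schema does) that `book` is unique so `INSERT OR REPLACE` overwrites rather than duplicates:

```python
import sqlite3

def upsert_one_one(conn, table, column, book_id_val_map):
    """Delete rows whose new value is None, then INSERT OR REPLACE the
    rest, mirroring the two-phase update in one_one_in_other()."""
    deleted = tuple((k,) for k, v in book_id_val_map.items() if v is None)
    if deleted:
        conn.executemany('DELETE FROM %s WHERE book=?' % table, deleted)
    updated = {k: v for k, v in book_id_val_map.items() if v is not None}
    if updated:
        conn.executemany(
            'INSERT OR REPLACE INTO %s(book,%s) VALUES (?,?)' % (table, column),
            tuple(updated.items()))
    # every book id passed in is dirtied, whether deleted or updated
    return set(book_id_val_map)
```

Note the return value is the full set of book ids touched, matching the "dirtied" set that `set_field()` propagates back to the caller.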
def dummy(book_id_val_map, *args):
return set()
class Writer(object):
def __init__(self, field):
self.adapter = get_adapter(field.name, field.metadata)
self.name = field.name
self.field = field
dt = field.metadata['datatype']
self.accept_vals = lambda x: True
if dt == 'composite' or field.name in {
'id', 'cover', 'size', 'path', 'formats', 'news'}:
self.set_books_func = dummy
elif field.is_many:
# TODO: Implement this
pass
else:
self.set_books_func = (one_one_in_books if field.metadata['table']
== 'books' else one_one_in_other)
if self.name in {'timestamp', 'uuid'}:
self.accept_vals = bool
def set_books(self, book_id_val_map, db):
book_id_val_map = {k:self.adapter(v) for k, v in
book_id_val_map.iteritems() if self.accept_vals(v)}
if not book_id_val_map:
return set()
dirtied = self.set_books_func(book_id_val_map, db, self.field)
return dirtied
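`Writer.set_books()` is a small pipeline: filter incoming values through `accept_vals`, normalize them with the field's adapter, and hand the survivors to a per-field setter. A minimal stand-in (names hypothetical) that keeps the same shape:

```python
class MiniWriter:
    """Sketch of the Writer dispatch above: adapt incoming values, drop
    the ones accept_vals rejects, pass the rest to a setter function."""
    def __init__(self, adapter, set_books_func, accept_vals=lambda x: True):
        self.adapter = adapter
        self.set_books_func = set_books_func
        self.accept_vals = accept_vals

    def set_books(self, book_id_val_map):
        book_id_val_map = {k: self.adapter(v)
                           for k, v in book_id_val_map.items()
                           if self.accept_vals(v)}
        if not book_id_val_map:
            return set()  # nothing accepted, nothing dirtied
        return self.set_books_func(book_id_val_map)
```

With `accept_vals=bool` (as the real Writer uses for `timestamp` and `uuid`), falsy inputs are silently dropped before the adapter ever runs.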

View File

@ -14,7 +14,7 @@ class ILIAD(USBMS):
name = 'IRex Iliad Device Interface' name = 'IRex Iliad Device Interface'
description = _('Communicate with the IRex Iliad eBook reader.') description = _('Communicate with the IRex Iliad eBook reader.')
author = _('John Schember') author = 'John Schember'
supported_platforms = ['windows', 'linux'] supported_platforms = ['windows', 'linux']
# Ordered list of supported formats # Ordered list of supported formats

View File

@ -15,7 +15,7 @@ class IREXDR1000(USBMS):
name = 'IRex Digital Reader 1000 Device Interface' name = 'IRex Digital Reader 1000 Device Interface'
description = _('Communicate with the IRex Digital Reader 1000 eBook ' \ description = _('Communicate with the IRex Digital Reader 1000 eBook ' \
'reader.') 'reader.')
author = _('John Schember') author = 'John Schember'
supported_platforms = ['windows', 'osx', 'linux'] supported_platforms = ['windows', 'osx', 'linux']
# Ordered list of supported formats # Ordered list of supported formats

View File

@ -40,7 +40,7 @@ class USBMS(CLI, Device):
''' '''
description = _('Communicate with an eBook reader.') description = _('Communicate with an eBook reader.')
author = _('John Schember') author = 'John Schember'
supported_platforms = ['windows', 'osx', 'linux'] supported_platforms = ['windows', 'osx', 'linux']
# Store type instances of BookList and Book. We must do this because # Store type instances of BookList and Book. We must do this because

View File

@ -60,7 +60,8 @@ class TOCAdder(object):
else: else:
oeb.guide.remove('toc') oeb.guide.remove('toc')
if not self.has_toc or 'toc' in oeb.guide or opts.no_inline_toc: if (not self.has_toc or 'toc' in oeb.guide or opts.no_inline_toc or
getattr(opts, 'mobi_passthrough', False)):
return return
self.log('\tGenerating in-line ToC') self.log('\tGenerating in-line ToC')

View File

@ -76,7 +76,7 @@ etc.</p>'''),
'''), '''),
'smarten_punctuation': _('''\ 'smarten_punctuation': _('''\
<p>Convert plain text, dashes, ellipsis, multiple hyphens, etc. into their <p>Convert plain text dashes, ellipsis, quotes, multiple hyphens, etc. into their
typographically correct equivalents.</p> typographically correct equivalents.</p>
<p>Note that the algorithm can sometimes generate incorrect results, especially <p>Note that the algorithm can sometimes generate incorrect results, especially
when single quotes at the start of contractions are involved.</p> when single quotes at the start of contractions are involved.</p>

View File

@@ -34,7 +34,7 @@ from calibre import isbytestring
 from calibre.utils.filenames import (ascii_filename, samefile,
         WindowsAtomicFolderMove, hardlink_file)
 from calibre.utils.date import (utcnow, now as nowf, utcfromtimestamp,
-        parse_only_date, UNDEFINED_DATE)
+        parse_only_date, UNDEFINED_DATE, parse_date)
 from calibre.utils.config import prefs, tweaks, from_json, to_json
 from calibre.utils.icu import sort_key, strcmp, lower
 from calibre.utils.search_query_parser import saved_searches, set_saved_searches
@@ -1134,6 +1134,8 @@ class LibraryDatabase2(LibraryDatabase, SchemaUpgrade, CustomColumns):
         base_path = os.path.join(self.library_path, self.path(id,
             index_is_id=True))
         self.dirtied([id])
+        if not os.path.exists(base_path):
+            os.makedirs(base_path)
         path = os.path.join(base_path, 'cover.jpg')
@@ -2565,6 +2567,8 @@ class LibraryDatabase2(LibraryDatabase, SchemaUpgrade, CustomColumns):
     def set_timestamp(self, id, dt, notify=True, commit=True):
         if dt:
+            if isinstance(dt, (unicode, bytes)):
+                dt = parse_date(dt, as_utc=True, assume_utc=False)
             self.conn.execute('UPDATE books SET timestamp=? WHERE id=?', (dt, id))
             self.data.set(id, self.FIELD_MAP['timestamp'], dt, row_is_id=True)
             self.dirtied([id], commit=False)
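The `set_timestamp` hunk above coerces string timestamps through calibre's `parse_date` before binding them to the SQL UPDATE. A minimal stand-alone sketch of that guard, using only the standard library (`datetime.fromisoformat` stands in for calibre's much more permissive `parse_date`; the function name is illustrative, not from the source):

```python
from datetime import datetime, timezone

def coerce_timestamp(dt):
    # Mirror of the guard added in the diff: accept str/bytes timestamps
    # and normalize to an aware datetime before the database write.
    # calibre uses its own parse_date(); fromisoformat is a stand-in here.
    if isinstance(dt, bytes):
        dt = dt.decode('utf-8')
    if isinstance(dt, str):
        dt = datetime.fromisoformat(dt)
    if dt.tzinfo is None:
        # Matches assume_utc-style handling for naive values.
        dt = dt.replace(tzinfo=timezone.utc)
    return dt
```

Passing a `datetime` through unchanged (apart from timezone normalization) keeps the original call sites working while string inputs no longer reach the `UPDATE` statement raw.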

View File

@@ -590,7 +590,7 @@ class BrowseServer(object):
         entries = get_category_items(category, entries,
                 self.search_restriction_name, datatype,
                 self.opts.url_prefix)
-        return json.dumps(entries, ensure_ascii=False)
+        return json.dumps(entries, ensure_ascii=True)
 
     @Endpoint()
@@ -772,6 +772,7 @@ class BrowseServer(object):
                 continue
             args, fmt, fmts, fname = self.browse_get_book_args(mi, id_)
             args['other_formats'] = ''
+            args['fmt'] = fmt
             if fmts and fmt:
                 other_fmts = [x for x in fmts if x.lower() != fmt.lower()]
                 if other_fmts:
@@ -794,8 +795,9 @@ class BrowseServer(object):
                 args['get_button'] = \
                         '<a href="%s" class="read" title="%s">%s</a>' % \
                         (xml(href, True), rt, xml(_('Get')))
+                args['get_url'] = xml(href, True)
             else:
-                args['get_button'] = ''
+                args['get_button'] = args['get_url'] = ''
             args['comments'] = comments_to_html(mi.comments)
             args['stars'] = ''
             if mi.rating:
@@ -814,7 +816,7 @@ class BrowseServer(object):
             summs.append(self.browse_summary_template.format(**args))
-        raw = json.dumps('\n'.join(summs), ensure_ascii=False)
+        raw = json.dumps('\n'.join(summs), ensure_ascii=True)
         return raw
 
     def browse_render_details(self, id_):
@@ -825,12 +827,17 @@ class BrowseServer(object):
         else:
             args, fmt, fmts, fname = self.browse_get_book_args(mi, id_,
                     add_category_links=True)
+            args['fmt'] = fmt
+            if fmt:
+                args['get_url'] = xml(self.opts.url_prefix + '/get/%s/%s_%d.%s'%(
+                    fmt, fname, id_, fmt), True)
+            else:
+                args['get_url'] = ''
             args['formats'] = ''
             if fmts:
                 ofmts = [u'<a href="{4}/get/{0}/{1}_{2}.{0}" title="{3}">{3}</a>'\
-                        .format(fmt, fname, id_, fmt.upper(),
-                            self.opts.url_prefix) for fmt in
-                        fmts]
+                        .format(xfmt, fname, id_, xfmt.upper(),
+                            self.opts.url_prefix) for xfmt in fmts]
                 ofmts = ', '.join(ofmts)
             args['formats'] = ofmts
             fields, comments = [], []
@@ -880,9 +887,10 @@ class BrowseServer(object):
                 c[1]) for c in comments]
             comments = u'<div class="comments">%s</div>'%('\n\n'.join(comments))
-        return self.browse_details_template.format(id=id_,
-                title=xml(mi.title, True), fields=fields,
-                formats=args['formats'], comments=comments)
+        return self.browse_details_template.format(
+                id=id_, title=xml(mi.title, True), fields=fields,
+                get_url=args['get_url'], fmt=args['fmt'],
+                formats=args['formats'], comments=comments)
 
     @Endpoint(mimetype='application/json; charset=utf-8')
     def browse_details(self, id=None):
@@ -893,7 +901,7 @@ class BrowseServer(object):
         ans = self.browse_render_details(id_)
-        return json.dumps(ans, ensure_ascii=False)
+        return json.dumps(ans, ensure_ascii=True)
 
     @Endpoint()
     def browse_random(self, *args, **kwargs):
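The `ensure_ascii=False` → `ensure_ascii=True` switches in this file implement the workaround, noted in the release notes, for the Chrome regression with UTF-8 book lists: escaping every non-ASCII character makes the JSON payload plain ASCII regardless of how the browser decodes the response. A minimal illustration with the standard library:

```python
import json

# With ensure_ascii=True, non-ASCII characters are emitted as \uXXXX
# escapes, so the AJAX response survives any charset mis-detection
# on the client side.
payload = json.dumps(u'Fu\u00dfball', ensure_ascii=True)
print(payload)  # "Fu\u00dfball"
```

The trade-off is a slightly larger payload for non-ASCII-heavy libraries, in exchange for output that no charset sniffing can corrupt.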

File diff suppressed because it is too large

Some files were not shown because too many files have changed in this diff