More http -> https
This commit is contained in:
parent 4c776ce914
commit 68c3d6322e
@@ -551,7 +551,7 @@ disconnected from the computer, for the changes to the collections to be
 recognized. As such, it is unlikely that any calibre developers will ever feel
 motivated enough to support it. There is however, a calibre plugin that allows
 you to create collections on your Kindle from the calibre metadata. It is
-available `from here <http://www.mobileread.com/forums/showthread.php?t=244202>`_.
+available `from here <https://www.mobileread.com/forums/showthread.php?t=244202>`_.

 .. note::
     Amazon have removed the ability to manipulate collections completely
@@ -686,7 +686,7 @@ fields. In addition, you can add any columns you like. Columns can be added via
 :guilabel:`Preferences->Interface->Add your own columns`. Watch the tutorial
 `UI Power tips <https://calibre-ebook.com/demo#tutorials>`_ to learn how to
 create your own columns, or read `this blog post
-<http://blog.calibre-ebook.com/2011/11/calibre-custom-columns.html>`_.
+<https://blog.calibre-ebook.com/2011/11/calibre-custom-columns.html>`_.

 You can also create "virtual columns" that contain combinations of the metadata
 from other columns. In the add column dialog use the :guilabel:`Quick create`
@@ -801,7 +801,7 @@ Even with these tools there is danger of data corruption/loss, so only do this
 if you are willing to live with that risk. In particular, be aware that
 **Google Drive** is incompatible with calibre, if you put your calibre library in
 Google Drive, **you will suffer data loss**. See `this thread
-<http://www.mobileread.com/forums/showthread.php?t=205581>`_ for details.
+<https://www.mobileread.com/forums/showthread.php?t=205581>`_ for details.


 Miscellaneous
@@ -59,8 +59,8 @@ bbc.co.uk

 Lets try the following two feeds from *The BBC*:

-#. News Front Page: http://newsrss.bbc.co.uk/rss/newsonline_world_edition/front_page/rss.xml
-#. Science/Nature: http://newsrss.bbc.co.uk/rss/newsonline_world_edition/science/nature/rss.xml
+#. News Front Page: https://newsrss.bbc.co.uk/rss/newsonline_world_edition/front_page/rss.xml
+#. Science/Nature: https://newsrss.bbc.co.uk/rss/newsonline_world_edition/science/nature/rss.xml

 Follow the procedure outlined in :ref:`calibre_blog` above to create a recipe for *The BBC* (using the feeds above). Looking at the downloaded ebook, we see that calibre has done a creditable job of extracting only the content you care about from each article's webpage. However, the extraction process is not perfect. Sometimes it leaves in undesirable content like menus and navigation aids or it removes content that should have been left alone, like article headings. In order, to have perfect content extraction, we will need to customize the fetch process, as described in the next section.

@@ -77,10 +77,10 @@ Using the print version of bbc.co.uk
 The first step is to look at the ebook we downloaded previously from :ref:`bbc`. At the end of each article, in the ebook is a little blurb telling you where the article was downloaded from. Copy and paste that URL into a browser. Now on the article webpage look for a link that points to the "Printable version". Click it to see the print version of the article. It looks much neater! Now compare the two URLs. For me they were:

 Article URL
-    http://news.bbc.co.uk/2/hi/science/nature/7312016.stm
+    https://news.bbc.co.uk/2/hi/science/nature/7312016.stm

 Print version URL
-    http://newsvote.bbc.co.uk/mpapps/pagetools/print/news.bbc.co.uk/2/hi/science/nature/7312016.stm
+    https://newsvote.bbc.co.uk/mpapps/pagetools/print/news.bbc.co.uk/2/hi/science/nature/7312016.stm

 So it looks like to get the print version, we need to prefix every article URL with:

@@ -96,7 +96,7 @@ You can see that the fields from the :guilabel:`Basic mode` have been translated
 .. code-block:: python

     def print_version(self, url):
-        return url.replace('http://', 'http://newsvote.bbc.co.uk/mpapps/pagetools/print/')
+        return url.replace('https://', 'https://newsvote.bbc.co.uk/mpapps/pagetools/print/')

 This is python, so indentation is important. After you've added the lines, it should look like:

@@ -179,7 +179,7 @@ A reasonably complex real life example that exposes more of the :term:`API` of `
     def get_browser(self):
         br = BasicNewsRecipe.get_browser()
         if self.username is not None and self.password is not None:
-            br.open('http://www.nytimes.com/auth/login')
+            br.open('https://www.nytimes.com/auth/login')
             br.select_form(name='login')
             br['USERID'] = self.username
             br['PASSWORD'] = self.password
@@ -187,7 +187,7 @@ A reasonably complex real life example that exposes more of the :term:`API` of `
         return br

     def parse_index(self):
-        soup = self.index_to_soup('http://www.nytimes.com/pages/todayspaper/index.html')
+        soup = self.index_to_soup('https://www.nytimes.com/pages/todayspaper/index.html')

         def feed_title(div):
             return ''.join(div.findAll(text=True, recursive=False)).strip()
@@ -233,7 +233,7 @@ A reasonably complex real life example that exposes more of the :term:`API` of `
         if refresh is None:
             return soup
         content = refresh.get('content').partition('=')[2]
-        raw = self.browser.open('http://www.nytimes.com'+content).read()
+        raw = self.browser.open('https://www.nytimes.com'+content).read()
         return BeautifulSoup(raw.decode('cp1252', 'replace'))


@@ -263,7 +263,7 @@ The next interesting feature is::

 The next new feature is the
 :meth:`calibre.web.feeds.news.BasicNewsRecipe.parse_index` method. Its job is
-to go to http://www.nytimes.com/pages/todayspaper/index.html and fetch the list
+to go to https://www.nytimes.com/pages/todayspaper/index.html and fetch the list
 of articles that appear in *todays* paper. While more complex than simply using
 :term:`RSS`, the recipe creates an ebook that corresponds very closely to the
 days paper. ``parse_index`` makes heavy use of `BeautifulSoup
@@ -19,7 +19,7 @@ class Economist(BasicNewsRecipe):
     language = 'en'

     __author__ = "Kovid Goyal"
-    INDEX = 'http://www.economist.com/printedition'
+    INDEX = 'https://www.economist.com/printedition'
     description = ('Global news and current affairs from a European'
                    ' perspective. Best downloaded on Friday mornings (GMT)')
     extra_css = '''
@@ -82,7 +82,7 @@ class Economist(BasicNewsRecipe):

     def economist_parse_index(self):
         # return [('Articles', [{'title':'test',
-        #   'url':'http://www.economist.com/news/americas/21699494-guide-cutting-corners-way-jos'}])]
+        #   'url':'https://www.economist.com/news/americas/21699494-guide-cutting-corners-way-jos'}])]
         soup = self.index_to_soup(self.INDEX)
         div = soup.find('div', attrs={'class': 'issue-image'})
         if div is not None:
@@ -110,7 +110,7 @@ class Economist(BasicNewsRecipe):
                 if a is not None:
                     url = a['href']
                     if url.startswith('/'):
-                        url = 'http://www.economist.com' + url
+                        url = 'https://www.economist.com' + url
                     url += '/print'
                     title = self.tag_to_string(a)
                     if title:
@@ -19,7 +19,7 @@ class Economist(BasicNewsRecipe):
     language = 'en'

     __author__ = "Kovid Goyal"
-    INDEX = 'http://www.economist.com/printedition'
+    INDEX = 'https://www.economist.com/printedition'
     description = ('Global news and current affairs from a European'
                    ' perspective. Best downloaded on Friday mornings (GMT)')
     extra_css = '''
@@ -82,7 +82,7 @@ class Economist(BasicNewsRecipe):

     def economist_parse_index(self):
         # return [('Articles', [{'title':'test',
-        #   'url':'http://www.economist.com/news/americas/21699494-guide-cutting-corners-way-jos'}])]
+        #   'url':'https://www.economist.com/news/americas/21699494-guide-cutting-corners-way-jos'}])]
         soup = self.index_to_soup(self.INDEX)
         div = soup.find('div', attrs={'class': 'issue-image'})
         if div is not None:
@@ -110,7 +110,7 @@ class Economist(BasicNewsRecipe):
                 if a is not None:
                     url = a['href']
                     if url.startswith('/'):
-                        url = 'http://www.economist.com' + url
+                        url = 'https://www.economist.com' + url
                     url += '/print'
                     title = self.tag_to_string(a)
                     if title:
@@ -409,7 +409,7 @@ horizontal_scrolling_per_column = True
 # calibre in English but want sorting to work in the language where you live.
 # Set the tweak to the desired ISO 639-1 language code, in lower case.
 # You can find the list of supported locales at
-# http://publib.boulder.ibm.com/infocenter/iseries/v5r3/topic/nls/rbagsicusortsequencetables.htm
+# https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes
 # Default: locale_for_sorting = '' -- use the language calibre displays in
 # Example: locale_for_sorting = 'fr' -- sort using French rules.
 # Example: locale_for_sorting = 'nb' -- sort using Norwegian rules.
@@ -89,7 +89,7 @@ class RelaySetup(QDialog):
         self.tl = QLabel(('<p>'+_('Setup sending email using') +
             ' <b>{name}</b><p>' +
             _('If you don\'t have an account, you can sign up for a free {name} email '
-              'account at <a href="http://{url}">http://{url}</a>. {extra}')).format(
+              'account at <a href="https://{url}">https://{url}</a>. {extra}')).format(
                 **service))
         l.addWidget(self.tl, 0, 0, 3, 0)
         self.tl.setWordWrap(True)
@@ -289,6 +289,3 @@ class SendEmail(QWidget, Ui_Form):
         conf.set('relay_password', hexlify(password.encode('utf-8')))
         conf.set('encryption', enc_method)
         return True
-
-
-
@@ -562,7 +562,7 @@ class RecursiveFetcher(object):
         return res


-def option_parser(usage=_('%prog URL\n\nWhere URL is for example http://google.com')):
+def option_parser(usage=_('%prog URL\n\nWhere URL is for example https://google.com')):
     parser = OptionParser(usage=usage)
     parser.add_option('-d', '--base-dir',
               help=_('Base directory into which URL is saved. Default is %default'),