More http -> https

Kovid Goyal 2017-02-09 14:02:14 +05:30
parent 4c776ce914
commit 68c3d6322e
7 changed files with 21 additions and 24 deletions


@@ -551,7 +551,7 @@ disconnected from the computer, for the changes to the collections to be
recognized. As such, it is unlikely that any calibre developers will ever feel
motivated enough to support it. There is, however, a calibre plugin that allows
you to create collections on your Kindle from the calibre metadata. It is
-available `from here <http://www.mobileread.com/forums/showthread.php?t=244202>`_.
+available `from here <https://www.mobileread.com/forums/showthread.php?t=244202>`_.

.. note::
    Amazon have completely removed the ability to manipulate collections
@@ -686,7 +686,7 @@ fields. In addition, you can add any columns you like. Columns can be added via
:guilabel:`Preferences->Interface->Add your own columns`. Watch the tutorial
`UI Power tips <https://calibre-ebook.com/demo#tutorials>`_ to learn how to
create your own columns, or read `this blog post
-<http://blog.calibre-ebook.com/2011/11/calibre-custom-columns.html>`_.
+<https://blog.calibre-ebook.com/2011/11/calibre-custom-columns.html>`_.
You can also create "virtual columns" that contain combinations of the metadata
from other columns. In the add column dialog use the :guilabel:`Quick create`
@@ -801,7 +801,7 @@ Even with these tools there is danger of data corruption/loss, so only do this
if you are willing to live with that risk. In particular, be aware that
**Google Drive** is incompatible with calibre: if you put your calibre library in
Google Drive, **you will suffer data loss**. See `this thread
-<http://www.mobileread.com/forums/showthread.php?t=205581>`_ for details.
+<https://www.mobileread.com/forums/showthread.php?t=205581>`_ for details.
Miscellaneous


@@ -59,8 +59,8 @@ bbc.co.uk
Let's try the following two feeds from *The BBC*:
-#. News Front Page: http://newsrss.bbc.co.uk/rss/newsonline_world_edition/front_page/rss.xml
-#. Science/Nature: http://newsrss.bbc.co.uk/rss/newsonline_world_edition/science/nature/rss.xml
+#. News Front Page: https://newsrss.bbc.co.uk/rss/newsonline_world_edition/front_page/rss.xml
+#. Science/Nature: https://newsrss.bbc.co.uk/rss/newsonline_world_edition/science/nature/rss.xml
Follow the procedure outlined in :ref:`calibre_blog` above to create a recipe for *The BBC* (using the feeds above). Looking at the downloaded ebook, we see that calibre has done a creditable job of extracting only the content you care about from each article's webpage. However, the extraction process is not perfect. Sometimes it leaves in undesirable content like menus and navigation aids, or it removes content that should have been left alone, like article headings. In order to have perfect content extraction, we will need to customize the fetch process, as described in the next section.
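The recipe skeleton produced by the steps above can be sketched as plain Python. The real recipe subclasses ``BasicNewsRecipe`` from calibre; a plain class is used here, with illustrative attribute values, so the sketch stays self-contained:

```python
# Minimal sketch of the BBC recipe described above. The real class would
# subclass calibre.web.feeds.news.BasicNewsRecipe; a bare class is used here
# so the example runs without calibre installed.
class BBC:
    title = 'The BBC'
    oldest_article = 2           # days; illustrative value
    max_articles_per_feed = 100  # illustrative value
    feeds = [
        ('News Front Page',
         'https://newsrss.bbc.co.uk/rss/newsonline_world_edition/front_page/rss.xml'),
        ('Science/Nature',
         'https://newsrss.bbc.co.uk/rss/newsonline_world_edition/science/nature/rss.xml'),
    ]
```

Each entry in ``feeds`` is a ``(title, url)`` pair, matching the two BBC feeds listed above.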
@@ -77,10 +77,10 @@ Using the print version of bbc.co.uk
The first step is to look at the ebook we downloaded previously from :ref:`bbc`. At the end of each article in the ebook is a little blurb telling you where the article was downloaded from. Copy and paste that URL into a browser. Now, on the article webpage, look for a link that points to the "Printable version". Click it to see the print version of the article. It looks much neater! Now compare the two URLs. For me they were:
Article URL
-    http://news.bbc.co.uk/2/hi/science/nature/7312016.stm
+    https://news.bbc.co.uk/2/hi/science/nature/7312016.stm
Print version URL
-    http://newsvote.bbc.co.uk/mpapps/pagetools/print/news.bbc.co.uk/2/hi/science/nature/7312016.stm
+    https://newsvote.bbc.co.uk/mpapps/pagetools/print/news.bbc.co.uk/2/hi/science/nature/7312016.stm
So it looks like, to get the print version, we need to prefix every article URL with:
@@ -96,7 +96,7 @@ You can see that the fields from the :guilabel:`Basic mode` have been translated
.. code-block:: python

    def print_version(self, url):
-        return url.replace('http://', 'http://newsvote.bbc.co.uk/mpapps/pagetools/print/')
+        return url.replace('https://', 'https://newsvote.bbc.co.uk/mpapps/pagetools/print/')
This is Python, so indentation is important. After you've added the lines, it should look like:
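The substitution performed by ``print_version`` can be checked on its own in a Python session; a quick sketch using the example article URL from above:

```python
def print_version(url):
    # Rewrite the scheme prefix so the article is fetched through the BBC
    # print gateway, mirroring the recipe's print_version hook above.
    return url.replace('https://', 'https://newsvote.bbc.co.uk/mpapps/pagetools/print/')

article = 'https://news.bbc.co.uk/2/hi/science/nature/7312016.stm'
print(print_version(article))
# -> https://newsvote.bbc.co.uk/mpapps/pagetools/print/news.bbc.co.uk/2/hi/science/nature/7312016.stm
```

The result matches the print version URL we found by hand, which is exactly what the hook relies on.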
@@ -179,7 +179,7 @@ A reasonably complex real life example that exposes more of the :term:`API` of `
    def get_browser(self):
        br = BasicNewsRecipe.get_browser()
        if self.username is not None and self.password is not None:
-            br.open('http://www.nytimes.com/auth/login')
+            br.open('https://www.nytimes.com/auth/login')
            br.select_form(name='login')
            br['USERID'] = self.username
            br['PASSWORD'] = self.password
@@ -187,7 +187,7 @@ A reasonably complex real life example that exposes more of the :term:`API` of `
            return br

    def parse_index(self):
-        soup = self.index_to_soup('http://www.nytimes.com/pages/todayspaper/index.html')
+        soup = self.index_to_soup('https://www.nytimes.com/pages/todayspaper/index.html')

        def feed_title(div):
            return ''.join(div.findAll(text=True, recursive=False)).strip()
@@ -233,7 +233,7 @@ A reasonably complex real life example that exposes more of the :term:`API` of `
        if refresh is None:
            return soup
        content = refresh.get('content').partition('=')[2]
-        raw = self.browser.open('http://www.nytimes.com'+content).read()
+        raw = self.browser.open('https://www.nytimes.com'+content).read()
        return BeautifulSoup(raw.decode('cp1252', 'replace'))
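The ``partition('=')[2]`` idiom in the hunk above extracts the redirect target from a meta-refresh value of the form ``N;URL=...``; a small sketch with a hypothetical refresh string (the path is made up):

```python
# A meta refresh value looks like '0;URL=/path'; everything after the
# first '=' is the redirect target.
content = '0;URL=/2017/02/09/science/some-article.html'.partition('=')[2]
target = 'https://www.nytimes.com' + content
print(target)
# -> https://www.nytimes.com/2017/02/09/science/some-article.html
```

``partition`` splits at the first ``'='`` only, so a target that itself contains ``'='`` (e.g. a query string) survives intact.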
@@ -263,7 +263,7 @@ The next interesting feature is::
The next new feature is the
:meth:`calibre.web.feeds.news.BasicNewsRecipe.parse_index` method. Its job is
-to go to http://www.nytimes.com/pages/todayspaper/index.html and fetch the list
+to go to https://www.nytimes.com/pages/todayspaper/index.html and fetch the list
of articles that appear in *today's* paper. While more complex than simply using
:term:`RSS`, the recipe creates an ebook that corresponds very closely to the
day's paper. ``parse_index`` makes heavy use of `BeautifulSoup
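The ``feed_title`` helper above collects only the text nodes that are direct children of a ``div``. The same idea can be sketched without BeautifulSoup using the standard library's ``html.parser`` (the HTML snippet is made up):

```python
from html.parser import HTMLParser

class FeedTitle(HTMLParser):
    # Collect only text that is a direct child of the outermost element,
    # mimicking findAll(text=True, recursive=False) in the recipe above.
    def __init__(self):
        super().__init__()
        self.depth = 0
        self.parts = []

    def handle_starttag(self, tag, attrs):
        self.depth += 1

    def handle_endtag(self, tag):
        self.depth -= 1

    def handle_data(self, data):
        if self.depth == 1:  # skip text nested inside child tags
            self.parts.append(data)

p = FeedTitle()
p.feed('<div>International <span>x</span>News</div>')
title = ''.join(p.parts).strip()
print(title)
# -> International News
```

The text inside the ``<span>`` is ignored, just as ``recursive=False`` ignores text in nested tags.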


@@ -19,7 +19,7 @@ class Economist(BasicNewsRecipe):
    language = 'en'
    __author__ = "Kovid Goyal"
-    INDEX = 'http://www.economist.com/printedition'
+    INDEX = 'https://www.economist.com/printedition'
    description = ('Global news and current affairs from a European'
                   ' perspective. Best downloaded on Friday mornings (GMT)')
    extra_css = '''
@@ -82,7 +82,7 @@ class Economist(BasicNewsRecipe):
    def economist_parse_index(self):
        # return [('Articles', [{'title':'test',
-        #   'url':'http://www.economist.com/news/americas/21699494-guide-cutting-corners-way-jos'}])]
+        #   'url':'https://www.economist.com/news/americas/21699494-guide-cutting-corners-way-jos'}])]
        soup = self.index_to_soup(self.INDEX)
        div = soup.find('div', attrs={'class': 'issue-image'})
        if div is not None:
@@ -110,7 +110,7 @@ class Economist(BasicNewsRecipe):
            if a is not None:
                url = a['href']
                if url.startswith('/'):
-                    url = 'http://www.economist.com' + url
+                    url = 'https://www.economist.com' + url
                url += '/print'
                title = self.tag_to_string(a)
                if title:
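The URL handling in the hunk above (absolutize a site-relative link, then append ``/print``) is easy to verify standalone; a sketch, with the helper name made up for illustration:

```python
def economist_print_url(href):
    # Mirror of the logic above: site-relative links get the host
    # prepended, and every article URL gets the print-friendly suffix.
    url = href
    if url.startswith('/'):
        url = 'https://www.economist.com' + url
    return url + '/print'

print(economist_print_url('/news/americas/21699494-guide-cutting-corners-way-jos'))
# -> https://www.economist.com/news/americas/21699494-guide-cutting-corners-way-jos/print
```

Absolute URLs pass through the ``startswith('/')`` guard untouched and only gain the ``/print`` suffix.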


@@ -19,7 +19,7 @@ class Economist(BasicNewsRecipe):
    language = 'en'
    __author__ = "Kovid Goyal"
-    INDEX = 'http://www.economist.com/printedition'
+    INDEX = 'https://www.economist.com/printedition'
    description = ('Global news and current affairs from a European'
                   ' perspective. Best downloaded on Friday mornings (GMT)')
    extra_css = '''
@@ -82,7 +82,7 @@ class Economist(BasicNewsRecipe):
    def economist_parse_index(self):
        # return [('Articles', [{'title':'test',
-        #   'url':'http://www.economist.com/news/americas/21699494-guide-cutting-corners-way-jos'}])]
+        #   'url':'https://www.economist.com/news/americas/21699494-guide-cutting-corners-way-jos'}])]
        soup = self.index_to_soup(self.INDEX)
        div = soup.find('div', attrs={'class': 'issue-image'})
        if div is not None:
@@ -110,7 +110,7 @@ class Economist(BasicNewsRecipe):
            if a is not None:
                url = a['href']
                if url.startswith('/'):
-                    url = 'http://www.economist.com' + url
+                    url = 'https://www.economist.com' + url
                url += '/print'
                title = self.tag_to_string(a)
                if title:


@@ -409,7 +409,7 @@ horizontal_scrolling_per_column = True
# calibre in English but want sorting to work in the language where you live.
# Set the tweak to the desired ISO 639-1 language code, in lower case.
# You can find the list of supported locales at
-# http://publib.boulder.ibm.com/infocenter/iseries/v5r3/topic/nls/rbagsicusortsequencetables.htm
+# https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes
# Default: locale_for_sorting = '' -- use the language calibre displays in
# Example: locale_for_sorting = 'fr' -- sort using French rules.
# Example: locale_for_sorting = 'nb' -- sort using Norwegian rules.


@@ -89,7 +89,7 @@ class RelaySetup(QDialog):
        self.tl = QLabel(('<p>'+_('Setup sending email using') +
            ' <b>{name}</b><p>' +
            _('If you don\'t have an account, you can sign up for a free {name} email '
-              'account at <a href="http://{url}">http://{url}</a>. {extra}')).format(
+              'account at <a href="https://{url}">https://{url}</a>. {extra}')).format(
            **service))
        l.addWidget(self.tl, 0, 0, 3, 0)
        self.tl.setWordWrap(True)
@@ -289,6 +289,3 @@ class SendEmail(QWidget, Ui_Form):
        conf.set('relay_password', hexlify(password.encode('utf-8')))
        conf.set('encryption', enc_method)
        return True
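``hexlify`` stores the password as a hex string, so reading it back needs the inverse ``unhexlify`` step. A round-trip sketch (Python 3 syntax; the password is made up):

```python
from binascii import hexlify, unhexlify

# Store: encode to UTF-8 bytes, then to a hex string, as in the hunk above.
stored = hexlify('s3cret'.encode('utf-8'))
# Read back: undo both steps in reverse order.
original = unhexlify(stored).decode('utf-8')
print(original)
# -> s3cret
```

Note this is encoding, not encryption: anyone with access to the config file can trivially recover the password.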


@@ -562,7 +562,7 @@ class RecursiveFetcher(object):
        return res

-def option_parser(usage=_('%prog URL\n\nWhere URL is for example http://google.com')):
+def option_parser(usage=_('%prog URL\n\nWhere URL is for example https://google.com')):
    parser = OptionParser(usage=usage)
    parser.add_option('-d', '--base-dir',
        help=_('Base directory into which URL is saved. Default is %default'),
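The ``option_parser`` above builds on the standard library's ``optparse``; a self-contained sketch of the same pattern (the default value and example arguments are illustrative):

```python
from optparse import OptionParser

parser = OptionParser(usage='%prog URL')
parser.add_option('-d', '--base-dir', default='.',
                  help='Base directory into which URL is saved. Default is %default')

# Parse an explicit argv rather than sys.argv, so the sketch is deterministic.
opts, args = parser.parse_args(['-d', '/tmp/fetch', 'https://google.com'])
print(opts.base_dir, args)
# -> /tmp/fetch ['https://google.com']
```

``optparse`` derives the attribute name ``base_dir`` from the long option ``--base-dir``, and ``%default`` in the help string is expanded to the declared default when help is printed.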