You can build a web crawler with urllib; it is part of the standard library, fairly durable, and has no problem fetching HTTPS pages. Below is a usage example:
import urllib.request  # Python 3; in Python 2 this was urllib.urlopen

link = 'https://www.intellipaat.com'
html = urllib.request.urlopen(link).read().decode('utf-8')  # fetch the page and decode the bytes
print(html)
A few lines are all you need to grab the HTML from a page.
I also recommend running a regex over the HTML to extract other links; an example using the re module would be:
import re
from urllib.parse import urlparse

for url in re.findall(r'<a[^>]+href=["\']([^"\']+)["\']', html, re.I):  # search the HTML for href values
    found = url.split('#', 1)[0]  # drop any fragment
    if not url.startswith('http'):  # relative URL: prepend the scheme and host of the original page
        found = '{uri.scheme}://{uri.netloc}'.format(uri=urlparse(link)) + found
    print(found)
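As an aside, the standard library's urllib.parse.urljoin resolves relative links (including paths like `../page`) more robustly than concatenating strings by hand. A minimal sketch, using a hypothetical base URL:

```python
from urllib.parse import urljoin

base = 'https://www.intellipaat.com/blog/'  # hypothetical page the links were scraped from

# An absolute path replaces everything after the host:
print(urljoin(base, '/community'))  # https://www.intellipaat.com/community

# A relative path is resolved against the base URL's directory:
print(urljoin(base, 'article'))     # https://www.intellipaat.com/blog/article

# A full URL is returned unchanged:
print(urljoin(base, 'https://example.com/x'))  # https://example.com/x
```

This lets you normalize every extracted href the same way before adding it to the crawl queue.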
Join this Python Training course now if you want to gain more knowledge in Python.