513
votes

Here's my code:

import urllib.request

response = urllib.urlopen("http://www.google.com")
html = response.read()
print(html)

Any help?

9
I see you edited your question again, so I edited my answer again to respond: your current problem is that you're saying urllib.urlopen("http://www.google.com/") instead of just urlopen("http://www.google.com/"). – Eli Courtwright

9 Answers

710
votes

As stated in the urllib2 documentation:

The urllib2 module has been split across several modules in Python 3 named urllib.request and urllib.error. The 2to3 tool will automatically adapt imports when converting your sources to Python 3.

So you should instead be saying

from urllib.request import urlopen
html = urlopen("http://www.google.com/").read()
print(html)

Your current, now-edited code sample is incorrect because you are saying urllib.urlopen("http://www.google.com/") instead of just urlopen("http://www.google.com/").
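
The split applies to the exceptions as well, so error handling now imports from urllib.error. A minimal sketch of catching those exceptions:

from urllib.request import urlopen
from urllib.error import HTTPError, URLError

try:
    html = urlopen("http://www.google.com/").read()
except HTTPError as e:
    # The server responded, but with an error status (4xx/5xx)
    print("HTTP error:", e.code)
except URLError as e:
    # The server could not be reached at all
    print("Failed to reach the server:", e.reason)
else:
    print(html)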

111
votes

For a script that works under both Python 2 (tested with 2.7.3 and 2.6.8) and Python 3 (tested with 3.2.3 and 3.3.2+), try:

#! /usr/bin/env python

try:
    # For Python 3.0 and later
    from urllib.request import urlopen
except ImportError:
    # Fall back to Python 2's urllib2
    from urllib2 import urlopen

html = urlopen("http://www.google.com/")
print(html.read())
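
One caveat: under Python 3, read() returns bytes, so the print above shows a b'...' literal. A minimal sketch that yields text in both versions, assuming the page is served as UTF-8:

html = urlopen("http://www.google.com/").read()
text = html.decode('utf-8')  # assumes the page is UTF-8 encoded
print(text)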
70
votes

The above didn't work for me in Python 3.3. Try this instead (YMMV, etc.):

import urllib.request
url = "http://www.google.com/"
request = urllib.request.Request(url)
response = urllib.request.urlopen(request)
print(response.read().decode('utf-8'))
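
If the request needs extra headers (a User-Agent, for example), the same Request constructor accepts a headers dict. A minimal sketch, with a placeholder User-Agent value:

import urllib.request
url = "http://www.google.com/"
headers = {"User-Agent": "Mozilla/5.0"}  # placeholder value, adjust as needed
request = urllib.request.Request(url, headers=headers)
response = urllib.request.urlopen(request)
print(response.read().decode('utf-8'))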
27
votes

Some tab completions to show the contents of the packages in Python 2 vs Python 3.

In Python 2:

In [1]: import urllib

In [2]: urllib.
urllib.ContentTooShortError      urllib.ftpwrapper                urllib.socket                    urllib.test1
urllib.FancyURLopener            urllib.getproxies                urllib.splitattr                 urllib.thishost
urllib.MAXFTPCACHE               urllib.getproxies_environment    urllib.splithost                 urllib.time
urllib.URLopener                 urllib.i                         urllib.splitnport                urllib.toBytes
urllib.addbase                   urllib.localhost                 urllib.splitpasswd               urllib.unquote
urllib.addclosehook              urllib.noheaders                 urllib.splitport                 urllib.unquote_plus
urllib.addinfo                   urllib.os                        urllib.splitquery                urllib.unwrap
urllib.addinfourl                urllib.pathname2url              urllib.splittag                  urllib.url2pathname
urllib.always_safe               urllib.proxy_bypass              urllib.splittype                 urllib.urlcleanup
urllib.base64                    urllib.proxy_bypass_environment  urllib.splituser                 urllib.urlencode
urllib.basejoin                  urllib.quote                     urllib.splitvalue                urllib.urlopen
urllib.c                         urllib.quote_plus                urllib.ssl                       urllib.urlretrieve
urllib.ftpcache                  urllib.re                        urllib.string                    
urllib.ftperrors                 urllib.reporthook                urllib.sys  

In Python 3:

In [2]: import urllib.
urllib.error        urllib.parse        urllib.request      urllib.response     urllib.robotparser

In [2]: import urllib.error.
urllib.error.ContentTooShortError  urllib.error.HTTPError             urllib.error.URLError

In [2]: import urllib.parse.
urllib.parse.parse_qs          urllib.parse.quote_plus        urllib.parse.urldefrag         urllib.parse.urlsplit
urllib.parse.parse_qsl         urllib.parse.unquote           urllib.parse.urlencode         urllib.parse.urlunparse
urllib.parse.quote             urllib.parse.unquote_plus      urllib.parse.urljoin           urllib.parse.urlunsplit
urllib.parse.quote_from_bytes  urllib.parse.unquote_to_bytes  urllib.parse.urlparse

In [2]: import urllib.request.
urllib.request.AbstractBasicAuthHandler         urllib.request.HTTPSHandler
urllib.request.AbstractDigestAuthHandler        urllib.request.OpenerDirector
urllib.request.BaseHandler                      urllib.request.ProxyBasicAuthHandler
urllib.request.CacheFTPHandler                  urllib.request.ProxyDigestAuthHandler
urllib.request.DataHandler                      urllib.request.ProxyHandler
urllib.request.FTPHandler                       urllib.request.Request
urllib.request.FancyURLopener                   urllib.request.URLopener
urllib.request.FileHandler                      urllib.request.UnknownHandler
urllib.request.HTTPBasicAuthHandler             urllib.request.build_opener
urllib.request.HTTPCookieProcessor              urllib.request.getproxies
urllib.request.HTTPDefaultErrorHandler          urllib.request.install_opener
urllib.request.HTTPDigestAuthHandler            urllib.request.pathname2url
urllib.request.HTTPErrorProcessor               urllib.request.url2pathname
urllib.request.HTTPHandler                      urllib.request.urlcleanup
urllib.request.HTTPPasswordMgr                  urllib.request.urlopen
urllib.request.HTTPPasswordMgrWithDefaultRealm  urllib.request.urlretrieve
urllib.request.HTTPRedirectHandler     


In [2]: import urllib.response.
urllib.response.addbase       urllib.response.addclosehook  urllib.response.addinfo       urllib.response.addinfourl
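
To make the mapping concrete: helpers such as urlencode and quote, which sat directly on urllib in Python 2 (and urljoin, which lived in urlparse), are now all under urllib.parse. A small sketch:

from urllib.parse import urlencode, quote, urljoin

print(urlencode({"q": "python 3"}))                       # q=python+3
print(quote("a b/c"))                                     # a%20b/c
print(urljoin("http://example.com/docs/", "page.html"))   # http://example.com/docs/page.html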
23
votes

Python 3:

import urllib.request

wp = urllib.request.urlopen("http://google.com")
pw = wp.read()
print(pw)

Python 2:

import urllib
import sys

wp = urllib.urlopen("http://google.com")
for line in wp:
    sys.stdout.write(line)

I have tested both code samples in their respective versions.

11
votes

Simplest of all solutions:

In Python 3.x:

import urllib.request
url = "https://api.github.com/users?since=100"
request = urllib.request.Request(url)
response = urllib.request.urlopen(request)
data_content = response.read()
print(data_content)
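
Since that GitHub endpoint returns JSON, a short follow-up sketch could parse the body with the standard json module, assuming a UTF-8 response:

import json

users = json.loads(data_content.decode('utf-8'))
print(len(users))          # number of user records returned
print(users[0]['login'])   # assumes a non-empty list with a 'login' field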
7
votes

In Python 3, to get text output:

import io
import urllib.request

response = urllib.request.urlopen("http://google.com")
# Wrap the binary response in a text stream (uses the locale's preferred encoding by default)
text = io.TextIOWrapper(response)
print(text.read())
7
votes

NOTE: urllib2 is no longer available in Python 3.

You can try the following code:

import urllib.request 
res = urllib.request.urlopen('url')
output = res.read()
print(output)

You can learn more about urllib.request from this link.

Using urllib3:

import urllib3
http = urllib3.PoolManager()
r = http.request('GET', 'url')
print(r.status)
print(r.headers)
print(r.data)
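
Note that r.data is bytes, just as with urllib.request. A sketch showing how to decode it and how to pass query parameters via a fields dict ('q' here is just an illustrative parameter name):

import urllib3

http = urllib3.PoolManager()
# For GET requests, the fields dict is encoded into the query string
r = http.request('GET', 'url', fields={'q': 'python'})
print(r.status)
print(r.data.decode('utf-8'))  # decoding assumes a UTF-8 body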

Also, if you want more details about urllib3, follow this link.

6
votes

This worked for me in Python 3:

import urllib.request
htmlfile = urllib.request.urlopen("http://google.com")
htmltext = htmlfile.read()
print(htmltext)