266
votes

SciPy appears to provide most (but not all [1]) of NumPy's functions in its own namespace. In other words, if there's a function named numpy.foo, there's almost certainly a scipy.foo. Most of the time, the two appear to be exactly the same, oftentimes even pointing to the same function object.

Sometimes, they're different. To give an example that came up recently:

  • numpy.log10 is a ufunc that returns NaNs for negative arguments;
  • scipy.log10 returns complex values for negative arguments and doesn't appear to be a ufunc.

The same can be said about log, log2 and logn, but not about log1p [2].

On the other hand, numpy.exp and scipy.exp appear to be different names for the same ufunc. This is also true of scipy.log1p and numpy.log1p.
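
For a concrete reproduction (with the versions in footnote [2]; recent SciPy releases have removed these top-level re-exports, so this is purely illustrative of the behaviour described here):

import numpy as np
import scipy

np.log10(-1.0)       # nan (possibly with an invalid-value warning): the ufunc stays in the real domain
scipy.log10(-1.0)    # 1.3643763538418412j, i.e. log10(-1) evaluated in the complex plane
scipy.exp is np.exp  # True: the very same ufunc object under two names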

Another example is numpy.linalg.solve vs scipy.linalg.solve. They're similar, but the latter offers some additional features over the former.

Why the apparent duplication? If this is meant to be a wholesale import of numpy into the scipy namespace, why the subtle differences in behaviour and the missing functions? Is there some overarching logic that would help clear up the confusion?

[1] numpy.min, numpy.max, numpy.abs and a few others have no counterparts in the scipy namespace.

[2] Tested using NumPy 1.5.1 and SciPy 0.9.0rc2.

I read in the answers that all of those functions are available without additionally importing NumPy because the intention is for users not to have to know the distinction between the scipy and numpy namespaces. Now I wonder, because I follow the posts about numpy and scipy a bit and use them myself, and I almost always see numpy being imported separately (as np). So they failed? – joris
There are some differences between scipy and numpy in the FFT stuff; I once got bitten by an issue that I eventually tracked down to scipy's and numpy's versions of rfft being defined differently. – wim
The FFTs of SciPy and NumPy are different. SciPy uses the Fortran library FFTPACK, hence the name scipy.fftpack. NumPy uses a C library called fftpack_lite; it has fewer functions and only supports double precision. Enthought Inc. has patched their numpy.fft to use Intel MKL for FFTs instead of fftpack_lite. – Sturla Molden
NumPy was originally named scipy.core. NumPy and SciPy are closely related projects. The main reason for the separation is to ensure that the array library (NumPy) is lean and mean, since the bulk of SciPy is not always needed. Also, there was a decision among scientists to retire the array packages Numeric (MIT) and Numarray (NASA) in favor of scipy.core, and thus it got the name NumPy. SciPy has still not reached 1.0, whereas NumPy is currently released as 1.8.1. NumPy has some facilities for FFT and linear algebra, but not as extensive as SciPy's. – Sturla Molden
@SturlaMolden Good to know about Enthought; do you know if Anaconda optimizes both or just numpy? – dashesy

8 Answers

145
votes

Last time I checked, the scipy __init__.py executes a

from numpy import *

so that the whole numpy namespace is included into scipy when the scipy module is imported.

The log10 behavior you are describing is interesting, because both versions are coming from numpy. One is a ufunc, the other is a numpy.lib function. Why scipy is preferring the library function over the ufunc, I don't know off the top of my head.


EDIT: In fact, I can answer the log10 question. Looking in the scipy __init__.py I see this:

# Import numpy symbols to scipy name space
import numpy as _num
from numpy import oldnumeric
from numpy import *
from numpy.random import rand, randn
from numpy.fft import fft, ifft
from numpy.lib.scimath import *

The log10 function you get in scipy comes from numpy.lib.scimath. Looking at that code, it says:

"""
Wrapper functions to more user-friendly calling of certain math functions
whose output data-type is different than the input data-type in certain
domains of the input.

For example, for functions like log() with branch cuts, the versions in this
module provide the mathematically valid answers in the complex plane:

>>> import math
>>> from numpy.lib import scimath
>>> scimath.log(-math.exp(1)) == (1+1j*math.pi)
True

Similarly, sqrt(), other base logarithms, power() and trig functions are
correctly handled.  See their respective docstrings for specific examples.
"""

It seems that this module overlays the base numpy ufuncs for sqrt, log, log2, logn, log10, power, arccos, arcsin, and arctanh. That explains the behavior you are seeing. The underlying design reason why it is done like that is probably buried in a mailing list post somewhere.
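
You can confirm this with the versions the question mentions (a sketch of the old behaviour; current SciPy no longer re-exports these names):

import numpy as np
import scipy
from numpy.lib import scimath

scipy.log10 is scimath.log10   # True: scipy simply re-exports the scimath wrapper
scipy.log10 is np.log10        # False: np.log10 is the real-valued ufunc
scimath.log10(-100.0)          # (2+1.3643763538418412j): complex output for negative input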

56
votes

From the SciPy Reference Guide:

... all of the Numpy functions have been subsumed into the scipy namespace so that all of those functions are available without additionally importing Numpy.

The intention is for users not to have to know the distinction between the scipy and numpy namespaces, though apparently you've found an exception.

54
votes

It seems from the SciPy FAQ that some NumPy functions are duplicated here for historical reasons, even though they properly belong in SciPy alone:

What is the difference between NumPy and SciPy?

In an ideal world, NumPy would contain nothing but the array data type and the most basic operations: indexing, sorting, reshaping, basic elementwise functions, et cetera. All numerical code would reside in SciPy. However, one of NumPy’s important goals is compatibility, so NumPy tries to retain all features supported by either of its predecessors. Thus NumPy contains some linear algebra functions, even though these more properly belong in SciPy. In any case, SciPy contains more fully-featured versions of the linear algebra modules, as well as many other numerical algorithms. If you are doing scientific computing with python, you should probably install both NumPy and SciPy. Most new features belong in SciPy rather than NumPy.

That explains why scipy.linalg.solve offers some additional features over numpy.linalg.solve.

I had not seen SethMMorton's answer to the related question.
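
As a small illustration of those extra features (sym_pos is the keyword in the SciPy versions discussed here, as shown in the source listing in the next answer; newer releases spell it assume_a='pos'):

import numpy as np
from scipy import linalg

a = np.array([[3.0, 1.0], [1.0, 2.0]])   # symmetric positive definite
b = np.array([9.0, 8.0])

x_np = np.linalg.solve(a, b)             # numpy always uses the general LAPACK solver (gesv)
x_sp = linalg.solve(a, b, sym_pos=True)  # scipy can be told to use the Cholesky-based posv solver
np.allclose(x_np, x_sp)                  # True: same solution, different LAPACK routine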

13
votes

There is a short comment at the end of the introduction to the SciPy documentation:

Another useful command is source. When given a function written in Python as an argument, it prints out a listing of the source code for that function. This can be helpful in learning about an algorithm or understanding exactly what a function is doing with its arguments. Also don’t forget about the Python command dir which can be used to look at the namespace of a module or package.

I think this will allow someone with enough knowledge of all the packages involved to pick apart exactly what the differences are between some scipy and numpy functions (it didn't help me with the log10 question at all). I definitely don't have that knowledge, but source does indicate that scipy.linalg.solve and numpy.linalg.solve interact with LAPACK in different ways:

Python 2.4.3 (#1, May  5 2011, 18:44:23) 
[GCC 4.1.2 20080704 (Red Hat 4.1.2-50)] on linux2
>>> import scipy
>>> import scipy.linalg
>>> import numpy
>>> scipy.source(scipy.linalg.solve)
In file: /usr/lib64/python2.4/site-packages/scipy/linalg/basic.py

def solve(a, b, sym_pos=0, lower=0, overwrite_a=0, overwrite_b=0,
          debug = 0):
    """ solve(a, b, sym_pos=0, lower=0, overwrite_a=0, overwrite_b=0) -> x

    Solve a linear system of equations a * x = b for x.

    Inputs:

      a -- An N x N matrix.
      b -- An N x nrhs matrix or N vector.
      sym_pos -- Assume a is symmetric and positive definite.
      lower -- Assume a is lower triangular, otherwise upper one.
               Only used if sym_pos is true.
      overwrite_y - Discard data in y, where y is a or b.

    Outputs:

      x -- The solution to the system a * x = b
    """
    a1, b1 = map(asarray_chkfinite,(a,b))
    if len(a1.shape) != 2 or a1.shape[0] != a1.shape[1]:
        raise ValueError, 'expected square matrix'
    if a1.shape[0] != b1.shape[0]:
        raise ValueError, 'incompatible dimensions'
    overwrite_a = overwrite_a or (a1 is not a and not hasattr(a,'__array__'))
    overwrite_b = overwrite_b or (b1 is not b and not hasattr(b,'__array__'))
    if debug:
        print 'solve:overwrite_a=',overwrite_a
        print 'solve:overwrite_b=',overwrite_b
    if sym_pos:
        posv, = get_lapack_funcs(('posv',),(a1,b1))
        c,x,info = posv(a1,b1,
                        lower = lower,
                        overwrite_a=overwrite_a,
                        overwrite_b=overwrite_b)
    else:
        gesv, = get_lapack_funcs(('gesv',),(a1,b1))
        lu,piv,x,info = gesv(a1,b1,
                             overwrite_a=overwrite_a,
                             overwrite_b=overwrite_b)

    if info==0:
        return x
    if info>0:
        raise LinAlgError, "singular matrix"
    raise ValueError,\
          'illegal value in %-th argument of internal gesv|posv'%(-info)

>>> scipy.source(numpy.linalg.solve)
In file: /usr/lib64/python2.4/site-packages/numpy/linalg/linalg.py

def solve(a, b):
    """
    Solve the equation ``a x = b`` for ``x``.

    Parameters
    ----------
    a : array_like, shape (M, M)
        Input equation coefficients.
    b : array_like, shape (M,)
        Equation target values.

    Returns
    -------
    x : array, shape (M,)

    Raises
    ------
    LinAlgError
        If `a` is singular or not square.

    Examples
    --------
    Solve the system of equations ``3 * x0 + x1 = 9`` and ``x0 + 2 * x1 = 8``:

    >>> a = np.array([[3,1], [1,2]])
    >>> b = np.array([9,8])
    >>> x = np.linalg.solve(a, b)
    >>> x
    array([ 2.,  3.])

    Check that the solution is correct:

    >>> (np.dot(a, x) == b).all()
    True

    """
    a, _ = _makearray(a)
    b, wrap = _makearray(b)
    one_eq = len(b.shape) == 1
    if one_eq:
        b = b[:, newaxis]
    _assertRank2(a, b)
    _assertSquareness(a)
    n_eq = a.shape[0]
    n_rhs = b.shape[1]
    if n_eq != b.shape[0]:
        raise LinAlgError, 'Incompatible dimensions'
    t, result_t = _commonType(a, b)
#    lapack_routine = _findLapackRoutine('gesv', t)
    if isComplexType(t):
        lapack_routine = lapack_lite.zgesv
    else:
        lapack_routine = lapack_lite.dgesv
    a, b = _fastCopyAndTranspose(t, a, b)
    pivots = zeros(n_eq, fortran_int)
    results = lapack_routine(n_eq, n_rhs, a, n_eq, pivots, b, n_eq, 0)
    if results['info'] > 0:
        raise LinAlgError, 'Singular matrix'
    if one_eq:
        return wrap(b.ravel().astype(result_t))
    else:
        return wrap(b.transpose().astype(result_t))

This is also my first post so if I should change something here please let me know.

8
votes

From Wikipedia (http://en.wikipedia.org/wiki/NumPy#History):

The Numeric code was adapted to make it more maintainable and flexible enough to implement the novel features of Numarray. This new project was part of SciPy. To avoid installing a whole package just to get an array object, this new package was separated and called NumPy.

scipy depends on numpy and imports many numpy functions into its namespace for convenience.

4
votes

Regarding the linalg package: the scipy functions call LAPACK and BLAS, which are available in highly optimised versions on many platforms and offer very good performance, particularly for operations on reasonably large dense matrices. On the other hand, they are not easy libraries to compile, requiring a Fortran compiler and many platform-specific tweaks to get full performance. Therefore, numpy provides simple implementations of many common linear algebra functions which are often good enough for many purposes.
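
If you want to see which BLAS/LAPACK your own installation was built against, both packages can report their build configuration (the output format varies across versions):

import numpy as np
import scipy

np.show_config()     # lists the BLAS/LAPACK libraries NumPy was linked against
scipy.show_config()  # same for SciPy; an optimised library (MKL, OpenBLAS, ATLAS) shows up here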

1
votes

From the Lectures on Quantitative Economics:

SciPy is a package that contains various tools that are built on top of NumPy, using its array data type and related functionality

In fact, when we import SciPy we also get NumPy, as can be seen from the SciPy initialization file:

# Import numpy symbols to scipy name space
import numpy as _num
linalg = None
from numpy import *
from numpy.random import rand, randn
from numpy.fft import fft, ifft
from numpy.lib.scimath import *

__all__  = []
__all__ += _num.__all__
__all__ += ['randn', 'rand', 'fft', 'ifft']

del _num
# Remove the linalg imported from numpy so that the scipy.linalg package can be
# imported.
del linalg
__all__.remove('linalg')

However, it’s more common and better practice to use NumPy functionality explicitly:

import numpy as np

a = np.identity(3)

What is useful in SciPy is the functionality in its subpackages:

  • scipy.optimize, scipy.integrate, scipy.stats, etc.
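
A minimal illustration of that subpackage functionality, using scipy.optimize (the function chosen here is arbitrary, just for demonstration):

import numpy as np
from scipy import optimize

# Find a root of f(x) = x**2 - 2 on the bracket [0, 2]
root = optimize.brentq(lambda x: x**2 - 2, 0.0, 2.0)
np.isclose(root, np.sqrt(2))  # True
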
1
votes

In addition to the SciPy FAQ, which describes the duplication as mainly for backwards compatibility, the NumPy documentation further clarifies that

Optionally SciPy-accelerated routines (numpy.dual)

Aliases for functions which may be accelerated by Scipy.

SciPy can be built to use accelerated or otherwise improved libraries for FFTs, linear algebra, and special functions. This module allows developers to transparently support these accelerated functions when SciPy is available but still support users who have only installed NumPy.

For brevity, these are:

  • Linear algebra
  • FFT
  • The Modified Bessel function of the first kind, order 0
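
A sketch of how numpy.dual was meant to be used (note that numpy.dual has since been deprecated and removed in recent NumPy releases, so this only applies to older installations):

# These names resolve to SciPy's accelerated implementations when SciPy is
# installed, and fall back to NumPy's own versions otherwise.
from numpy.dual import fft, inv, i0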

Also, from the SciPy Tutorial:

The top level of SciPy also contains functions from NumPy and numpy.lib.scimath. However, it is better to use them directly from the NumPy module instead.

So, for new applications, you should prefer the NumPy version of the array operations that are duplicated in the top level of SciPy. For the domains listed above, prefer the routines in SciPy, falling back to NumPy for backward compatibility where necessary.
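
For example, the modified Bessel function listed above exists in both places; following this guideline you would reach for the SciPy version (a simple illustration, not a benchmark):

import numpy as np
from scipy import special

np.i0(1.0)       # NumPy's implementation (a polynomial approximation)
special.i0(1.0)  # SciPy's implementation, backed by its compiled special-function library
# both return roughly 1.26606588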

In my personal experience, most of the array functions I use exist in the top level of NumPy (except for random). However, all the domain specific routines exist in subpackages of SciPy, so I rarely use anything from the top level of SciPy.