I’m using SciPy’s curve_fit to fit a Gaussian curve to data, and am interested in analysing the quality of the fit. I know curve_fit returns a useful pcov matrix, from which the standard deviation of each fitting parameter can be computed as sqrt(pcov[0, 0]) for the parameter popt[0].
e.g. a code snippet for this:
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, *p):
    A, sigma, mu, y_offset = p
    return A * np.exp(-(x - mu)**2 / (2. * sigma**2)) + y_offset

# x and y are the data arrays being fitted
p0 = [1, 2, 3, 4]  # Initial guess of parameters [A, sigma, mu, y_offset]
popt, pcov = curve_fit(gaussian, x, y, p0=p0)  # Coefficients for fit and covariance
print('Parameter A is %f (%f uncertainty)' % (popt[0], np.sqrt(pcov[0, 0])))
This gives an indication of the uncertainty in each fitting coefficient, but I wonder how best to obtain an overall “quality of fit” parameter so that I can compare the quality of fit between different curve equations (e.g. Gaussian, super-Gaussian, etc.).
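For concreteness, one residual-based candidate I’ve come across is the coefficient of determination (R²), which I could presumably compute per model and compare (a minimal sketch, reusing x, y and popt from above):

# One candidate overall metric: coefficient of determination (R^2)
residuals = y - gaussian(x, *popt)
ss_res = np.sum(residuals**2)             # residual sum of squares
ss_tot = np.sum((y - np.mean(y))**2)      # total sum of squares
r_squared = 1 - ss_res / ss_tot           # closer to 1 = better fit
print('R^2 = %f' % r_squared)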
On a simple level, I could just compute the percentage uncertainty in each coefficient and then average them (sketched below), although I wonder if there’s a better way. From searching online, and from the particularly useful “goodness of fit” Wikipedia page, I note there are many measures for this. Does anyone know whether any of them are built into Python packages, or have any general advice on good ways to quantify curve fitting?
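The naive averaging approach I have in mind would look something like this (again just a sketch, using popt and pcov from above):

# Naive overall metric: mean percentage uncertainty across fitted parameters
perr = np.sqrt(np.diag(pcov))                # one-sigma uncertainty per parameter
pct_uncertainty = np.abs(perr / popt) * 100  # uncertainty as % of each fitted value
print('Mean percentage uncertainty: %.2f%%' % np.mean(pct_uncertainty))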
Thanks for any help!