
For a project, I need to replicate some results that currently exist in Stata output files (.dta) and were computed from an older Stata script. The new version of the project needs to be written in Python.

The specific part I am having difficulty with is matching the quantile breakpoint calculations of the weighted version of Stata's xtile command. Ties between data points are not the problem: my weights come from a continuous quantity, so ties are extremely unlikely (and there are none in my test data set). So miscategorization due to ties is not the issue.

I have read the Wikipedia article on weighted percentiles and also this Cross-Validated post describing an alternate algorithm that should replicate R's type-7 quantiles.
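For concreteness, the Wikipedia rule assigns each sorted data point the percentile position p_i = (C_i - w_i/2) / S, where C_i is the cumulative weight through point i and S is the total weight, and then interpolates linearly between those positions. A minimal sketch (the helper name is my own):

```python
import numpy as np

def weighted_percentile_positions(a, weights):
    """Percentile position of each sorted data point, per the
    Wikipedia weighted-percentile definition."""
    order = np.argsort(a)
    a_sorted = a[order]
    w_sorted = weights[order]
    cum_w = np.cumsum(w_sorted)
    p = (cum_w - 0.5 * w_sorted) / cum_w[-1]
    return a_sorted, p

a = np.array([1.0, 2.0, 3.0, 4.0])
w = np.ones(4)
a_sorted, p = weighted_percentile_positions(a, w)
# With equal weights, p is [0.125, 0.375, 0.625, 0.875], and linear
# interpolation at 0.5 gives a median of 2.5.
median = np.interp(0.5, p, a_sorted)
```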

I've implemented both of the weighted algorithms (code at the bottom), but I'm still not matching very well against the computed quantiles in the Stata output.

Does anyone know the specific algorithm used by the Stata routine? The docs do not describe it clearly: they say something about taking a mean at flat portions of the CDF when inverting it, but that hardly pins down the actual algorithm, and it is ambiguous whether any other interpolation is done.
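One possible reading of that wording (an assumption on my part, not a confirmed description of xtile) is a plain step-function inverse of the weighted CDF, with averaging only when the requested quantile lands exactly on a cumulative-weight boundary:

```python
import numpy as np

def stata_style_quantile(x, w, q):
    # Hypothetical reading of the manual's wording, NOT a confirmed
    # description of xtile: invert the weighted CDF as a step function,
    # averaging the two neighbouring values when q lands exactly on a
    # flat portion (i.e. on a cumulative-weight boundary).
    order = np.argsort(x)
    xs, ws = x[order], w[order]
    cw = np.cumsum(ws)
    target = q * cw[-1]
    i = np.searchsorted(cw, target)       # first index with cw[i] >= target
    if i < len(xs) - 1 and np.isclose(cw[i], target):
        return 0.5 * (xs[i] + xs[i + 1])  # mean at the flat portion
    return xs[i]
```

With equal weights this reproduces Stata's documented unweighted percentile rule (average the two middle values when n*q is an integer, otherwise take the next order statistic), which is why I would guess at this reading; but it is a guess.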

Note that numpy.percentile and scipy.stats.mstats.mquantiles do not accept weights and cannot perform weighted quantiles, just regular equal-weighted ones. The crux of my issue lies in the need to use weights.
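For integer weights, any weighted implementation can at least be cross-checked against the unweighted routines by expanding the sample by repetition (values here are made up for illustration):

```python
import numpy as np

# With integer "frequency" weights, a weighted quantile of (a, w)
# can be cross-checked by expanding the sample and calling the
# plain, unweighted numpy.percentile.
a = np.array([1.0, 2.0, 3.0])
w = np.array([1, 3, 1])

expanded = np.repeat(a, w)              # [1., 2., 2., 2., 3.]
median_expanded = np.percentile(expanded, 50)
```

Note that this only checks the frequency-weight interpretation; the interpolating weighted definitions above need not agree with it exactly.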

Note: I've debugged both methods below quite a lot, but feel free to point out a bug in a comment if you see one. I have tested both methods on smaller data sets, and the results are good and also match R's output in the cases where I can guarantee which method R is using. The code is not elegant yet and too much is copied between the two branches, but all that will be fixed once I'm confident the output is what I need.

The problem is that I don't know the method Stata xtile uses, and I want to reduce mismatches between the code below and Stata xtile when run on the same data set.

Algorithms that I've tried:

import numpy as np

def mark_weighted_percentiles(a, labels, weights, type):
    # a is an input array of values.
    # weights is an input array of weights, so weights[i] goes with a[i].
    # labels are the names you want to give to the xtiles.
    # type selects the weighted algorithm:
    #     1 for the Wikipedia method, 2 for the stackexchange post.

    # The code outputs an array the same shape as 'a', with labels[i]
    # in spot j if a[j] falls in x-tile i. The number of xtiles
    # requested is inferred from the length of 'labels'.

    # First type: "vanilla" weights from the Wikipedia article.
    if type == 1:

        # Sort the values and apply the same sort to the weights.
        N = len(a)
        sort_indx = np.argsort(a)
        tmp_a = a[sort_indx].copy()
        tmp_weights = weights[sort_indx].copy()

        # 'labels' stores the names of the x-tiles the user wants,
        # and is assumed to be linearly spaced between 0 and 1,
        # so 5 labels implies quintiles, for example.
        num_categories = len(labels)
        breaks = np.linspace(0, 1, num_categories + 1)

        # Compute the percentile values at each explicit data point in a.
        cu_weights = np.cumsum(tmp_weights)
        p_vals = (1.0 / cu_weights[-1]) * (cu_weights - 0.5 * tmp_weights)

        # Set up the output array.
        ret = np.repeat(0, len(a))
        if len(a) < num_categories:
            return ret

        # Values at the breakpoints.
        quantiles = []

        # Find the two indices that bracket each breakpoint percentile,
        # then interpolate between the two a-values at those indices,
        # using interp-weights built from the cumulative sum of weights.
        for brk in breaks:
            if brk <= p_vals[0]:
                i_low = i_high = 0
            elif brk >= p_vals[-1]:
                i_low = i_high = N - 1
            else:
                for ii in range(N - 1):
                    if p_vals[ii] <= brk < p_vals[ii + 1]:
                        i_low = ii
                        i_high = ii + 1
                        break

            if i_low == i_high:
                v = tmp_a[i_low]
            else:
                # Two brackets: apply the interpolation formula from Wikipedia.
                v = tmp_a[i_low] + ((brk - p_vals[i_low]) / (p_vals[i_high] - p_vals[i_low])) * (tmp_a[i_high] - tmp_a[i_low])

            quantiles.append(v)

        # Now that the weighted breakpoints are set, categorize the
        # elements of a with logical indexing.
        for i in range(len(quantiles) - 1):
            lower = quantiles[i]
            upper = quantiles[i + 1]
            ret[np.logical_and(a >= lower, a < upper)] = labels[i]

        # Make sure the extreme elements are marked.
        ret[a <= quantiles[0]] = labels[0]
        ret[a >= quantiles[-1]] = labels[-1]

        return ret

    # The stats.stackexchange suggestion.
    elif type == 2:

        N = len(a)
        sort_indx = np.argsort(a)
        tmp_a = a[sort_indx].copy()
        tmp_weights = weights[sort_indx].copy()

        num_categories = len(labels)
        breaks = np.linspace(0, 1, num_categories + 1)

        cu_weights = np.cumsum(tmp_weights)

        # Formula from the stats.stackexchange.com post.
        s_vals = [0.0]
        for ii in range(1, N):
            s_vals.append(ii * tmp_weights[ii] + (N - 1) * cu_weights[ii - 1])
        s_vals = np.asarray(s_vals)

        # Normalized s_vals for comparing with the breakpoints.
        norm_s_vals = (1.0 / s_vals[-1]) * s_vals

        # Set up the output array.
        ret = np.repeat(0, N)
        if N < num_categories:
            return ret

        # Values at the breakpoints.
        quantiles = []

        # Find the two indices that bracket each breakpoint percentile,
        # then interpolate between the two a-values at those indices,
        # using interp-weights built from the cumulative s_vals.
        for brk in breaks:
            if brk <= norm_s_vals[0]:
                i_low = i_high = 0
            elif brk >= norm_s_vals[-1]:
                i_low = i_high = N - 1
            else:
                for ii in range(N - 1):
                    if norm_s_vals[ii] <= brk < norm_s_vals[ii + 1]:
                        i_low = ii
                        i_high = ii + 1
                        break

            if i_low == i_high:
                v = tmp_a[i_low]
            else:
                # Interpolate as in the type-1 method, but using s_vals.
                v = tmp_a[i_low] + (((brk * s_vals[-1]) - s_vals[i_low]) / (s_vals[i_high] - s_vals[i_low])) * (tmp_a[i_high] - tmp_a[i_low])
            quantiles.append(v)

        # Categorize the elements of a as before.
        for i in range(len(quantiles) - 1):
            lower = quantiles[i]
            upper = quantiles[i + 1]
            ret[np.logical_and(a >= lower, a < upper)] = labels[i]

        # Make sure the extreme elements are marked.
        ret[a <= quantiles[0]] = labels[0]
        ret[a >= quantiles[-1]] = labels[-1]

        return ret
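As an aside (not part of either method above), the final categorization loops can be done without an explicit Python loop. A sketch using np.digitize, with made-up breakpoints standing in for the computed quantiles:

```python
import numpy as np

# Hypothetical data and precomputed quintile breakpoints
# (the values here are made up for illustration).
a = np.array([0.1, 0.3, 0.45, 0.6, 0.9])
breaks = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])  # len(labels)+1 edges
labels = np.arange(1, 6)

# Bin against the interior edges only; clipping keeps the top value
# in the last bin, mirroring the ret[a >= quantiles[-1]] fix-up.
idx = np.clip(np.digitize(a, breaks[1:-1]), 0, len(labels) - 1)
categories = labels[idx]
```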
Another sub-question that arises: if you are making xtiles on a particular column of data, but other columns have values missing in Stata's working memory, will Stata ignore the rows with missing data when forming the breakpoints? I have weak evidence suggesting it does not, but cannot find confirmation.

2 Answers


Here's a screenshot of the formulas from the Stata 12 manual (StataCorp. 2011. Stata Statistical Software: Release 12. College Station, TX: StataCorp LP, pp. 501-502). If this does not help, you might ask this question on Statalist or contact Philip Ryan (the author of the original code) directly.

[screenshots of the pctile/xtile formulas from the Stata 12 manual]


Did you know that you can just read Stata's code?

. ssc install adoedit
. adoedit xtile