15
votes

If I have a variable number of sets (let's call the number n), which have at most m elements each, what's the most efficient way to calculate the pairwise intersections for all pairs of sets? Note that this is different from the intersection of all n sets.

For example, if I have the following sets:

A={"a","b","c"}
B={"c","d","e"}
C={"a","c","e"}

I want to be able to find:

intersect_AB={"c"}
intersect_BC={"c", "e"}
intersect_AC={"a", "c"}

Another acceptable format (if it makes things easier) would be a map of items in a given set to the sets that contain that same item. For example:

intersections_C={"a": {"A", "C"},
                 "c": {"A", "B", "C"},
                 "e": {"B", "C"}}

I know that one way to do this would be to create a dictionary mapping each value in the union of all n sets to a list of the sets in which it occurs, and then iterate through all of those values to create lists such as intersections_C above, but I'm not sure how that scales as n increases and the sizes of the sets become very large.

Some additional background information:

  1. The sets are all of roughly the same length, but are also very large (large enough that storing them all in memory is a realistic concern; an algorithm that avoids that would be preferred, though it is not necessary)
  2. The size of the intersections between any two sets is very small compared to the size of the sets themselves
  3. If it helps, we can assume anything we need to about the ordering of the input sets.
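The inverted-index approach described in the question can be sketched as follows (a rough sketch using the example sets above; the variable names are my own):

```python
from collections import defaultdict
from itertools import combinations

sets = {"A": {"a", "b", "c"}, "B": {"c", "d", "e"}, "C": {"a", "c", "e"}}

# Map each element to the labels of the sets that contain it.
containing = defaultdict(set)
for label, s in sets.items():
    for item in s:
        containing[item].add(label)

# Read the pairwise intersections off that index: every element shared by
# two or more sets contributes to each pair of those sets.
pairwise = defaultdict(set)
for item, labels in containing.items():
    for pair in combinations(sorted(labels), 2):
        pairwise[pair].add(item)

# pairwise[("A", "B")] == {"c"}, pairwise[("A", "C")] == {"a", "c"}, etc.
```

This touches every element once to build the index, then only iterates over elements that actually occur in more than one set, which is cheap when the intersections are small relative to the sets.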
Have you tried the method that you know works? – Simeon Visser
I would suggest the following: traverse all sets and build a map by tracking where you find each element. This is O(N log N) (assuming that the dictionary adds a logarithmic overhead), where N is the total number of elements. – nickie
I've tried the method I described on small samples, but the problem is that a lot of the data I will be using with this is user-fed. I would ideally like to be able to support much larger use cases, so I was wondering if there's a more common/efficient way to do this than the naive approach I described. – ankushg
@nickie Is your idea to traverse the sets and make a dictionary independently for all n sets, making the dictionary only of size m per iteration rather than n*m to store all possible elements? – ankushg
I think this can be done in linear time using a hash table, linear with respect to the size of the sets: O(N + M + N * c), where c is a constant that represents the cost of accessing an entry in the hash table; this constant will be proportional to the length of the strings in your sets. – rendon

3 Answers

8
votes

This ought to do what you want:

import random as RND
import string
import itertools as IT

Mock some data:

fnx = lambda: set(RND.sample(string.ascii_uppercase, 7))
S = [fnx() for c in range(5)]

Generate an index list of the sets in S, so the sets can be referenced more concisely below:

idx = range(len(S))

Get all possible unique pairs of the items in S; since set intersection is commutative, we want combinations rather than permutations:

pairs = IT.combinations(idx, 2)

Write a function to perform the set intersection:

nt = lambda a, b: S[a].intersection(S[b])

Fold this function over the pairs, keying the result of each function call to its arguments:

res = dict([ (t, nt(*t)) for t in pairs ])

The result below, formatted per the first option recited in the OP, is a dictionary in which the values are the set intersections of two sequences; each value is keyed to a tuple comprised of the two indices of those sequences.

This solution is really just two lines of code: (i) calculate the combinations; (ii) apply some function over each combination, storing the returned value in a structured (key-value) container.

The memory footprint of this solution is minimal, but you can do even better by returning a generator expression in the last step, i.e.:

res = ( (t, nt(*t)) for t in pairs )

Notice that with this approach, neither the sequence of pairs nor the corresponding intersections is ever written out in memory; i.e., both pairs and res are iterators.
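For illustration, here is the generator version being consumed lazily, using the OP's example data in place of the mocked sets (the integer indices stand in for the set labels):

```python
import itertools as IT

# The OP's three sets, indexed 0, 1, 2 (standing in for A, B, C).
S = [{"a", "b", "c"}, {"c", "d", "e"}, {"a", "c", "e"}]

idx = range(len(S))
pairs = IT.combinations(idx, 2)
nt = lambda a, b: S[a].intersection(S[b])

# A generator expression: nothing is computed until it is consumed.
res = ((t, nt(*t)) for t in pairs)

for (i, j), common in res:
    print(i, j, common)
```

Each intersection is computed one at a time as the loop pulls it from the generator, so at no point does the full collection of results sit in memory.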

3
votes

If we can assume that the input sets are ordered, a pseudo-mergesort approach seems promising. Treating each set as a sorted stream, advance the streams in parallel, always only advancing those where the value is the lowest among all current iterators. Compare each current value with the new minimum every time an iterator is advanced, and dump the matches into your same-item collections.

-4
votes

How about using the intersection method of set? See below:

A={"a","b","c"}
B={"c","d","e"}
C={"a","c","e"}

intersect_AB = A.intersection(B)
intersect_BC = B.intersection(C)
intersect_AC = A.intersection(C)

print(intersect_AB, intersect_BC, intersect_AC)