563 votes

The character 👩‍👩‍👧‍👦 (family with two women, one girl, and one boy) is encoded as such:

U+1F469 WOMAN,
U+200D ZWJ,
U+1F469 WOMAN,
U+200D ZWJ,
U+1F467 GIRL,
U+200D ZWJ,
U+1F466 BOY

So it's very interestingly-encoded; the perfect target for a unit test. However, Swift doesn't seem to know how to treat it. Here's what I mean:

"👩‍👩‍👧‍👦".contains("👩‍👩‍👧‍👦") // true
"👩‍👩‍👧‍👦".contains("👩") // false
"👩‍👩‍👧‍👦".contains("\u{200D}") // false
"👩‍👩‍👧‍👦".contains("👧") // false
"👩‍👩‍👧‍👦".contains("👦") // true

So, Swift says it contains itself (good) and a boy (good!). But it then says it does not contain a woman, girl, or zero-width joiner. What's happening here? Why does Swift know it contains a boy but not a woman or girl? I could understand if it treated it as a single character and only recognized it containing itself, but the fact that it got one subcomponent and no others baffles me.

This does not change if I use something like "👩".characters.first!.


Even more confounding is this:

let manual = "\u{1F469}\u{200D}\u{1F469}\u{200D}\u{1F467}\u{200D}\u{1F466}"
Array(manual.characters) // ["👩‍", "👩‍", "👧‍", "👦"]

Even though I placed the ZWJs in there, they aren't reflected in the character array. What followed was a little telling:

manual.contains("👩") // false
manual.contains("👧") // false
manual.contains("👦") // true

So I get the same behavior with the character array... which is supremely annoying, since I know what the array looks like.

This also does not change if I use something like "👩".characters.first!.

Comments are not for extended discussion; this conversation has been moved to chat. – Martijn Pieters♦
Fixed in Swift 4. "👩‍👩‍👧‍👦".contains("\u{200D}") still returns false; not sure if that's a bug or a feature. – Kevin
Yikes. Unicode has ruined text. It's turned plain text into a markup language. – Boann
@Boann yes and no... a lot of these changes were put in to make en/decoding things like Hangul Jamo (255 codepoints) not an absolute nightmare like it was for Kanji (13,108 codepoints) and Chinese Ideographs (199,528 codepoints). Of course, it's more complicated and interesting than the length of an SO comment could allow, so I encourage you to check it out yourself :D – Ky Leggiero

6 Answers

423 votes

This has to do with how the String type works in Swift, and how the contains(_:) method works.

The '👩‍👩‍👧‍👦' is what's known as an emoji sequence, which is rendered as one visible character in a string. The sequence is made up of Character objects, and at the same time it is made up of UnicodeScalar objects.

If you check the character count of the string, you'll see that it is made up of four characters, while if you check the unicode scalar count, it will show you a different result:

print("👩‍👩‍👧‍👦".characters.count)     // 4
print("👩‍👩‍👧‍👦".unicodeScalars.count) // 7

Now, if you parse through the characters and print them, you'll see what seem like normal characters, but in fact the first three characters contain both an emoji and a zero-width joiner in their UnicodeScalarView:

for char in "👩‍👩‍👧‍👦".characters {
    print(char)

    let scalars = String(char).unicodeScalars.map({ String($0.value, radix: 16) })
    print(scalars)
}

// 👩‍
// ["1f469", "200d"]
// 👩‍
// ["1f469", "200d"]
// 👧‍
// ["1f467", "200d"]
// 👦
// ["1f466"]

As you can see, only the last character does not contain a zero-width joiner, so when using the contains(_:) method, it works as you'd expect. Since you aren't comparing against emoji containing zero-width joiners, the method won't find a match for any but the last character.

To expand on this: if you create a String composed of an emoji character ending with a zero-width joiner and pass it to the contains(_:) method, it will also evaluate to false. This is because contains(_:) is exactly equivalent to range(of:) != nil, which tries to find an exact match for the given argument. Since characters ending with a zero-width joiner form an incomplete sequence, the method combines characters ending with a zero-width joiner with what follows into complete sequences before looking for a match. This means that the method won't ever find a match if:

  1. the argument ends with a zero-width joiner, and
  2. the string to parse doesn't contain an incomplete sequence (i.e. ending with a zero-width joiner and not followed by a compatible character).

To demonstrate:

let s = "\u{1f469}\u{200d}\u{1f469}\u{200d}\u{1f467}\u{200d}\u{1f466}" // 👩‍👩‍👧‍👦

s.range(of: "\u{1f469}\u{200d}") != nil                            // false
s.range(of: "\u{1f469}\u{200d}\u{1f469}") != nil                   // false

However, since the comparison only looks ahead, you can find several other complete sequences within the string by working backwards:

s.range(of: "\u{1f466}") != nil                                    // true
s.range(of: "\u{1f467}\u{200d}\u{1f466}") != nil                   // true
s.range(of: "\u{1f469}\u{200d}\u{1f467}\u{200d}\u{1f466}") != nil  // true

// Same as the above:
s.contains("\u{1f469}\u{200d}\u{1f467}\u{200d}\u{1f466}")          // true

The easiest solution would be to provide a specific compare option to the range(of:options:range:locale:) method. The option String.CompareOptions.literal performs the comparison on an exact character-by-character equivalence. As a side note, what's meant by character here is not the Swift Character, but the UTF-16 representation of both the instance and comparison string – however, since String doesn't allow malformed UTF-16, this is essentially equivalent to comparing the Unicode scalar representation.

Here I've overloaded the Foundation method, so if you need the original one, rename this one or something:

extension String {
    func contains(_ string: String) -> Bool {
        return self.range(of: string, options: String.CompareOptions.literal) != nil
    }
}

Now the method works as it "should" with each character, even with incomplete sequences:

s.contains("👩")          // true
s.contains("👩\u{200d}")  // true
s.contains("\u{200d}")    // true

111 votes

The first problem is that you're bridging to Foundation with contains (Swift's String is not a Collection), so this is NSString behavior, which I don't believe handles composed emoji as powerfully as Swift. That said, I believe Swift is currently implementing Unicode 8, and the rules around this situation were revised in Unicode 10 (so this may all change when they implement Unicode 10; I haven't dug into whether it will or not).

To simplify things, let's get rid of Foundation and use Swift directly, which provides views that are more explicit. We'll start with characters:

"👩‍👩‍👧‍👦".characters.forEach { print($0) }
👩‍
👩‍
👧‍
👦

OK. That's what we expected. But it's a lie. Let's see what those characters really are.

"👩‍👩‍👧‍👦".characters.forEach { print(String($0).unicodeScalars.map{$0}) }
["\u{0001F469}", "\u{200D}"]
["\u{0001F469}", "\u{200D}"]
["\u{0001F467}", "\u{200D}"]
["\u{0001F466}"]

Ah… So it's ["👩ZWJ", "👩ZWJ", "👧ZWJ", "👦"]. That makes everything a bit more clear. 👩 is not a member of this list (it's "👩ZWJ"), but 👦 is a member.

The problem is that Character is a "grapheme cluster," which composes things together (like attaching the ZWJ). What you're really searching for is a unicode scalar. And that works exactly as you're expecting:

"👩‍👩‍👧‍👦".unicodeScalars.contains("👩") // true
"👩‍👩‍👧‍👦".unicodeScalars.contains("\u{200D}") // true
"👩‍👩‍👧‍👦".unicodeScalars.contains("👧") // true
"👩‍👩‍👧‍👦".unicodeScalars.contains("👦") // true

And of course we can also look for the actual character that is in there:

"👩‍👩‍👧‍👦".characters.contains("👩\u{200D}") // true

(This heavily duplicates Ben Leggiero's points. I posted this before noticing he'd answered. Leaving in case it is clearer to anyone.)

75 votes

It seems that Swift considers a ZWJ and the character immediately preceding it to form a single extended grapheme cluster. We can see this when mapping the array of characters to their unicodeScalars:

Array(manual.characters).map { $0.description.unicodeScalars }

This prints the following from LLDB:

▿ 4 elements
  ▿ 0 : StringUnicodeScalarView("👩‍")
    - 0 : "\u{0001F469}"
    - 1 : "\u{200D}"
  ▿ 1 : StringUnicodeScalarView("👩‍")
    - 0 : "\u{0001F469}"
    - 1 : "\u{200D}"
  ▿ 2 : StringUnicodeScalarView("👧‍")
    - 0 : "\u{0001F467}"
    - 1 : "\u{200D}"
  ▿ 3 : StringUnicodeScalarView("👦")
    - 0 : "\u{0001F466}"

Additionally, .contains groups extended grapheme clusters into a single character. For instance, take the Hangul jamo ᄒ (U+1112), ᅡ (U+1161), and ᆫ (U+11AB), which combine to make the Korean word for "one": 한:

"\u{1112}\u{1161}\u{11AB}".contains("\u{1112}") // false

This could not find ᄒ because the three codepoints are grouped into one cluster, which acts as one character. Similarly, \u{1F469}\u{200D} (WOMAN ZWJ) is one cluster, which acts as one character.
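
A quick Swift check of that claim (my sketch, not from the original answer, using the modern `count` property rather than the Swift 3 `characters` view): the three jamo form one grapheme cluster but remain three distinct scalars.

```swift
// Three Hangul jamo codepoints that compose into the single syllable "한".
let han = "\u{1112}\u{1161}\u{11AB}"

print(han.count)                // 1 — one extended grapheme cluster
print(han.unicodeScalars.count) // 3 — three individual codepoints
```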

19 votes

The other answers discuss what Swift does, but don't go into much detail about why.

Do you expect “Å” to equal “Å”? I expect you would.

One of these is a letter with a combiner; the other is a single composed character. You can add many different combiners to a base character, and a human would still consider it to be a single character. To deal with this sort of discrepancy, the concept of a grapheme was created to represent what a human would consider a character, regardless of the codepoints used.
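
A hedged Swift illustration of that point (my example, not part of this answer): Swift's String equality is defined over canonical equivalence, so the two spellings of "Å" compare equal even though they use different numbers of codepoints.

```swift
let precomposed = "\u{00C5}"         // "Å" — LATIN CAPITAL LETTER A WITH RING ABOVE
let decomposed  = "\u{0041}\u{030A}" // "A" followed by COMBINING RING ABOVE

// Both render as "Å" and compare equal as Strings...
print(precomposed == decomposed)        // true
// ...even though their scalar counts differ.
print(precomposed.unicodeScalars.count) // 1
print(decomposed.unicodeScalars.count)  // 2
```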

Now text messaging services have been combining characters into graphical emoji for years :) → 🙂. So various emoji were added to Unicode.
These services also started combining emoji together into composite emoji.
There is of course no reasonable way to encode every possible combination as an individual codepoint, so the Unicode Consortium decided to expand the concept of graphemes to encompass these composite characters.

What this boils down to is that "👩‍👩‍👧‍👦" should be considered a single "grapheme cluster" if you're trying to work with it at the grapheme level, as Swift does by default.

If you want to check if it contains "👦" as a part of that, then you should go down to a lower level.


I don't know Swift syntax so here is some Perl 6 which has similar level of support for Unicode.
(Perl 6 supports Unicode version 9 so there may be discrepancies)

say "\c[family: woman woman girl boy]" eq "👩‍👩‍👧‍👦"; # True

# .contains is a Str method only, in Perl 6
say "👩‍👩‍👧‍👦".contains("👩‍👩‍👧‍👦")    # True
say "👩‍👩‍👧‍👦".contains("👦");        # False
say "👩‍👩‍👧‍👦".contains("\x[200D]");  # False

# comb with no arguments splits a Str into graphemes
my @graphemes = "👩‍👩‍👧‍👦".comb;
say @graphemes.elems;                # 1

Let's go down a level

# look at it as a list of NFC codepoints
my @components := "👩‍👩‍👧‍👦".NFC;
say @components.elems;                     # 7

say @components.grep("👦".ord).Bool;       # True
say @components.grep("\x[200D]".ord).Bool; # True
say @components.grep(0x200D).Bool;         # True

Going down to this level can make some things harder though.

my @match = "👩‍👩‍👧‍👦".ords;
my $l = @match.elems;
say @components.rotor( $l => 1-$l ).grep(@match).Bool; # True

I assume that .contains in Swift makes that easier, but that doesn't mean there aren't other things which become more difficult.

Working at this level makes it much easier to accidentally split a string in the middle of a composite character for example.
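
As a Swift sketch of that hazard (my example; this answer's code is Perl 6): slicing the scalar view can cut a ZWJ sequence in half, leaving an incomplete sequence that no longer matches the original.

```swift
let family = "👩‍👩‍👧‍👦" // 7 scalars: 👩 ZWJ 👩 ZWJ 👧 ZWJ 👦

// Keep only the first three scalars: 👩, ZWJ, 👩 — the family glyph is gone.
let cut = String(String.UnicodeScalarView(family.unicodeScalars.prefix(3)))

print(cut.unicodeScalars.count) // 3
print(cut == family)            // false — we split mid-composite-character
```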


What you are inadvertently asking is why this higher-level representation doesn't work like a lower-level representation would. The answer, of course, is that it's not supposed to.

If you are asking yourself “why does this have to be so complicated”, the answer is of course “humans”.

18 votes

Swift 4.0 update

String received lots of revisions in the Swift 4 update, as documented in SE-0163. Two emoji are used for this demo, representing two different structures. Both are emoji sequences, combined from multiple code points.

👍🏽 is the combination of two emoji, 👍 and 🏽

👩‍👩‍👧‍👦 is the combination of four emoji, with zero width joiner connected. The format is 👩‍joiner👩‍joiner👧‍joiner👦

1. Counts

In Swift 4.0, an emoji is counted as a single grapheme cluster, so every emoji has a count of 1. The count property is also directly available on String, so you can call it like this:

"👍🏽".count  // 1. Not available on swift 3
"👩‍👩‍👧‍👦".count  // 1. Not available on swift 3

The character array of a string is also counted in grapheme clusters in Swift 4.0, so both of the following lines print 1. These two emoji are examples of emoji sequences, where several emoji are combined, with or without a zero-width joiner \u{200d} between them. In Swift 3.0, the character array of such a string separates out each emoji, resulting in an array with multiple elements (the joiner is ignored in this process). In Swift 4.0, however, the character array sees the combined emoji as one piece, so the character count of any emoji will always be 1.

"👍🏽".characters.count  // 1. In swift 3, this prints 2
"👩‍👩‍👧‍👦".characters.count  // 1. In swift 3, this prints 4

unicodeScalars remains unchanged in Swift 4. It exposes the individual Unicode scalars that make up the given string.

"👍🏽".unicodeScalars.count  // 2. Combination of two emoji
"👩‍👩‍👧‍👦".unicodeScalars.count  // 7. Combination of four emoji with joiner between them

2. Contains

In Swift 4.0, the contains method ignores the zero-width joiner in emoji. So it returns true for any of the four component emoji of "👩‍👩‍👧‍👦", and returns false if you check for the joiner itself. In Swift 3.0, however, the joiner is not ignored and is combined with the emoji in front of it, so checking whether "👩‍👩‍👧‍👦" contains any of the first three component emoji gives false.

"👍🏽".contains("👍")       // true
"👍🏽".contains("🏽")        // true
"👩‍👩‍👧‍👦".contains("👩‍👩‍👧‍👦")       // true
"👩‍👩‍👧‍👦".contains("👩")       // true. In swift 3, this prints false
"👩‍👩‍👧‍👦".contains("\u{200D}") // false
"👩‍👩‍👧‍👦".contains("👧")       // true. In swift 3, this prints false
"👩‍👩‍👧‍👦".contains("👦")       // true

0 votes

Emoji, much like the Unicode standard itself, are deceptively complicated. Skin tones, genders, professions, groups of people, zero-width-joiner sequences, flags (pairs of regional indicator codepoints), and other complications can make emoji parsing messy. A Christmas tree, a slice of pizza, or a pile of poop can each be represented with a single Unicode codepoint. Not to mention that when new emoji are introduced, there is a delay between emoji release and iOS support, and different versions of iOS support different versions of the Unicode standard.

TL;DR: I have worked on these features and open-sourced a library, JKEmoji (of which I am the author), to help parse strings with emoji. It makes parsing as easy as:

print("I love these emojis 👩‍👩‍👧‍👦💪🏾🧥👧🏿🌈".emojiCount) // 5

It does that by routinely refreshing a local database of all recognized emoji as of the latest Unicode version (12.0 at the time of writing) and cross-referencing them with what is recognized as a valid emoji in the running OS version, by looking at the bitmap representation of an unrecognized emoji character.

NOTE

A previous answer got deleted for advertising my library without clearly stating that I am the author. I am acknowledging this again.