They are serialization protocols, primarily. Any time you need to transfer data between machines or processes, or store it on disk, it needs to be serialized.
Xml / json / etc work ok, but they have overheads that make them undesirable for some scenarios: in addition to having limited features, they are relatively large on the wire, and computationally expensive to process in either direction (serializing and deserializing). Size can be improved by compression, but that adds yet more processing cost. They do have the advantage of being human-readable, but: most data is not read by humans.
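To make the size point concrete, here's a minimal Python sketch (the record and its fields are made up for illustration) comparing a JSON encoding against a hand-rolled fixed binary layout:

```python
import json
import struct

# A small record: an id plus a latitude/longitude pair.
record = {"id": 123456, "lat": 51.5074, "lon": -0.1278}

# Text encoding: human-readable, but verbose and costly to parse.
as_json = json.dumps(record).encode("utf-8")

# Hand-rolled fixed binary layout: little-endian int32 + two float64s.
# Compact and cheap to decode, but brittle: no field names, no versioning.
as_binary = struct.pack("<idd", record["id"], record["lat"], record["lon"])

print(len(as_json))    # 46 bytes
print(len(as_binary))  # 20 bytes
```

The binary form is less than half the size and trivial to decode - but notice that it is exactly the kind of fragile, hand-rolled format the next paragraph warns about.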
Now, people could spend ages manually writing tedious, bug-ridden, sub-optimal, non-portable formats that are less verbose; or they could use well-tested, general-purpose serialization formats that are well-documented, cross-platform, cheap to process, and designed by people who spend far too long worrying about serialization - so they tend to be friendly, for example version-tolerant (old and new code can exchange data without breaking). Ideally, such a format would also offer a platform-neutral description layer (think "wsdl" or "mex") that lets you say "here's what the data looks like" to any other dev - without knowing what tools/language/platform they are using - and have them consume the data painlessly, without writing a new serializer/deserializer from scratch.
That is where protobuf and thrift come in.
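To make that "description layer" idea concrete, here is a sketch of what such a contract looks like in protobuf's schema language (the `Person` message and its fields are invented for illustration; thrift has an equivalent IDL):

```proto
// person.proto - a hypothetical contract; any platform with a protobuf
// compiler can generate a serializer/deserializer from this one file.
syntax = "proto3";

message Person {
  int32 id = 1;               // field *numbers* (not names) go on the wire
  string name = 2;
  repeated string email = 3;  // new fields can be added later without
                              // breaking existing readers: version tolerance
}
```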
Volume-wise, in most cases I would actually expect both ends to be the same technology at the same company: simply, they need to get data from A to B with the minimum of fuss and overhead, or they need to store it and load it back later (for example, we use protobuf inside redis blobs as a secondary cache).
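As a sketch of that secondary-cache pattern in Python (here `person_pb2` is whatever `protoc` generated from a schema like the one above; the key name and TTL are illustrative, not a recommendation):

```python
import redis
from person_pb2 import Person  # generated via: protoc --python_out=. person.proto

r = redis.Redis()  # localhost:6379 by default

# Serialize to a compact binary blob and cache it with a 1-hour TTL.
p = Person(id=123, name="Fred", email=["fred@example.com"])
r.set(f"person:{p.id}", p.SerializeToString(), ex=3600)

# Later, possibly in another process: fetch and deserialize,
# falling back to the primary store on a cache miss.
blob = r.get("person:123")
person = Person.FromString(blob) if blob is not None else None
```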