e.g. the name field is now split into first_name and last_name
The Avro definition of a "backwards compatible" schema won't allow you to add these new fields without 1) keeping the old name field and 2) adding defaults to the new fields - https://docs.confluent.io/current/schema-registry/avro.html
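To make that concrete, here's a rough sketch (plain Apache Avro in Java, with a made-up User record) of what that kind of v1 -> v2 evolution looks like, checked with Avro's own SchemaCompatibility helper:

```java
import org.apache.avro.Schema;
import org.apache.avro.SchemaCompatibility;

public class NameSplitCompatibility {
    // v1: the original schema with a single name field
    private static final Schema V1 = new Schema.Parser().parse(
        "{\"type\":\"record\",\"name\":\"User\",\"fields\":["
      + "{\"name\":\"name\",\"type\":\"string\"}"
      + "]}");

    // v2: keeps the old name field and adds the new fields with defaults
    private static final Schema V2 = new Schema.Parser().parse(
        "{\"type\":\"record\",\"name\":\"User\",\"fields\":["
      + "{\"name\":\"name\",\"type\":\"string\"},"
      + "{\"name\":\"first_name\",\"type\":\"string\",\"default\":\"\"},"
      + "{\"name\":\"last_name\",\"type\":\"string\",\"default\":\"\"}"
      + "]}");

    public static void main(String[] args) {
        // A v2 reader can decode data written with v1: the new fields
        // just fall back to their defaults (backwards compatible).
        System.out.println(SchemaCompatibility
            .checkReaderWriterCompatibility(V2, V1).getType()); // COMPATIBLE

        // A v1 reader can still decode v2 data too, because the old
        // name field was kept (forwards compatible).
        System.out.println(SchemaCompatibility
            .checkReaderWriterCompatibility(V1, V2).getType()); // COMPATIBLE
    }
}
```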
If your consumers upgrade their schema first, they still see the old name field (which old producers keep sending) and read the defaults for the new fields, until the producers upgrade and start sending the new fields
If the producers upgrade first, then consumers on the old schema will never see the new fields, so the producers should still send out the name field - or opt to send some garbage value that intentionally starts breaking consumers (e.g. make the field nullable to begin with but never actually send a null, then start sending nulls while consumers assume the field can't be null)
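For the producer-first case, "keep sending name" just means the upgraded producer populates both the old and new fields. A minimal sketch with Avro's GenericRecord, reusing the made-up User schema from above:

```java
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;

public class UpgradedProducerSketch {
    // Same made-up v2 schema: old name kept, new fields added with defaults
    static final Schema V2 = new Schema.Parser().parse(
        "{\"type\":\"record\",\"name\":\"User\",\"fields\":["
      + "{\"name\":\"name\",\"type\":\"string\"},"
      + "{\"name\":\"first_name\",\"type\":\"string\",\"default\":\"\"},"
      + "{\"name\":\"last_name\",\"type\":\"string\",\"default\":\"\"}"
      + "]}");

    static GenericRecord buildUser(String firstName, String lastName) {
        GenericRecord user = new GenericData.Record(V2);
        // Keep writing the old field so consumers that haven't upgraded
        // (and so never see first_name/last_name) still get a usable value.
        user.put("name", firstName + " " + lastName);
        user.put("first_name", firstName);
        user.put("last_name", lastName);
        return user;
    }
}
```

From there it goes through the usual KafkaProducer with the Avro serializer, nothing special.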
In either case, I feel like your record processing logic has to detect which fields are actually populated rather than null or sitting at their default values.
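Something like this, sketched against that same made-up User record (the empty-string defaults are an assumption from my schema sketch, not something Avro requires):

```java
import org.apache.avro.generic.GenericRecord;

public class NameExtractor {
    // Handles records from both old and new producers: prefer the new
    // fields when they hold real values, otherwise fall back to splitting
    // the legacy name field.
    static String[] extractName(GenericRecord user) {
        Object first = user.get("first_name"); // null if the reader schema lacks it
        Object last  = user.get("last_name");
        if (first != null && !first.toString().isEmpty()
                && last != null && !last.toString().isEmpty()) {
            return new String[] { first.toString(), last.toString() };
        }
        // Old-style record (or bare defaults): split the legacy name field ourselves.
        String[] parts = user.get("name").toString().split(" ", 2);
        return new String[] { parts[0], parts.length > 1 ? parts[1] : "" };
    }
}
```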
But compare that to JSON or any plain string format (like CSV): you have no guarantees about which fields should be there, whether they're nullable, or what types they are (is a date a string or a long?), so you can't guarantee what objects your clients will internally map messages into for processing... To me that's a bigger advantage of Avro than the compatibility rules
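For comparison, this is the kind of per-field guesswork a plain-JSON consumer ends up doing (sketched with Jackson; the payload and field names are invented):

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class UntypedJsonGuesswork {
    public static void main(String[] args) throws Exception {
        // Nothing in the payload tells you whether created_at is epoch millis
        // or an ISO-8601 string, or whether name is allowed to be null.
        String payload = "{\"name\":\"Jane Doe\",\"created_at\":1234567890}";

        JsonNode createdAt = new ObjectMapper().readTree(payload).get("created_at");

        // So every consumer ends up writing its own per-field type sniffing.
        if (createdAt != null && createdAt.isNumber()) {
            System.out.println("epoch millis: " + createdAt.asLong());
        } else if (createdAt != null && createdAt.isTextual()) {
            System.out.println("ISO string: " + createdAt.asText());
        } else {
            System.out.println("missing, or some other type entirely");
        }
    }
}
```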
Personally, I find enforcing FULL_TRANSITIVE compatibility on the registry works best when you have little to no communication between your Kafka users
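For reference, setting that is just a PUT against the registry's /config endpoint - rough sketch with Java's built-in HTTP client (the registry URL and subject name are placeholders for your setup):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SetFullTransitive {
    public static void main(String[] args) throws Exception {
        String registry = "http://localhost:8081"; // placeholder registry URL
        String subject  = "users-value";           // placeholder subject name

        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create(registry + "/config/" + subject))
            .header("Content-Type", "application/vnd.schemaregistry.v1+json")
            .PUT(HttpRequest.BodyPublishers.ofString(
                "{\"compatibility\": \"FULL_TRANSITIVE\"}"))
            .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
        // Drop the subject (just PUT /config) to set it registry-wide instead.
    }
}
```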