But to maximize write performance I guess it would be better to allow
multiple write sides at the same time. But how are consistency and
consensus handled in a system like this?
In an event-sourced system, consistency on the write side is always strong. This is enforced by the aggregates and the Event store
using optimistic locking: in case of a concurrent write (in fact, events are only ever appended to the store) the whole command is retried. This is possible because aggregate command methods are pure (side-effect-free): as long as the events are not yet persisted, the command can safely be retried.
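A minimal sketch of that retry loop, assuming a hypothetical store API (`loadEvents`/`appendEvents`) and a `ConcurrencyError` thrown when another writer got there first; the names are illustrative, not a specific library:

```typescript
class ConcurrencyError extends Error {}

interface DomainEvent { type: string; data: unknown; }

interface EventStore {
  loadEvents(aggregateId: string): Promise<{ events: DomainEvent[]; version: number }>;
  // Assumed to throw ConcurrencyError if another writer appended in the meantime.
  appendEvents(aggregateId: string, expectedVersion: number, events: DomainEvent[]): Promise<void>;
}

async function executeCommand(
  store: EventStore,
  aggregateId: string,
  // A pure command handler: takes the past events, returns new events (or throws).
  handle: (history: DomainEvent[]) => DomainEvent[],
  maxRetries = 5
): Promise<void> {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const { events, version } = await store.loadEvents(aggregateId);
    const newEvents = handle(events);               // no side effects, safe to re-run
    try {
      await store.appendEvents(aggregateId, version, newEvents);
      return;                                       // append succeeded, we are done
    } catch (err) {
      if (err instanceof ConcurrencyError) continue; // someone else won the race, retry on fresh events
      throw err;
    }
  }
  throw new Error(`Command could not be applied after ${maxRetries} retries`);
}
```

The command handler is only re-invoked with the reloaded history, so retrying never duplicates side effects.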
When two or more machines update the state at the same time (which
one to choose and persist?)
Both. The first command (there is always a first) generates events that are persisted to the store. The second command fails with a low-level concurrency exception. It is then retried by loading and applying all previous events, including those generated by the first command. The second command then generates additional events that are also persisted, or it throws an exception if the new state no longer permits it to be handled.
Note that the second command is executed at least twice, but each time the previous events (and thus the state) are different.
The infrastructure keeps an aggregate version attached to each aggregate stream. Each appended event increases this version. There is a unique constraint on the pair (aggregate id, version). This is probably how most event stores are implemented.
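To make the versioning concrete, here is an in-memory sketch reusing the `EventStore` interface and `ConcurrencyError` from the snippet above. A real store would enforce the (aggregate id, version) uniqueness with a database constraint; here it is simulated with a simple check before appending:

```typescript
class InMemoryEventStore implements EventStore {
  private streams = new Map<string, DomainEvent[]>();

  async loadEvents(aggregateId: string) {
    const events = this.streams.get(aggregateId) ?? [];
    return { events, version: events.length };     // version = number of events so far
  }

  async appendEvents(aggregateId: string, expectedVersion: number, events: DomainEvent[]) {
    const stream = this.streams.get(aggregateId) ?? [];
    if (stream.length !== expectedVersion) {
      // Another command appended first: the moral equivalent of a unique-constraint violation.
      throw new ConcurrencyError(
        `Expected version ${expectedVersion}, stream is at ${stream.length}`
      );
    }
    this.streams.set(aggregateId, stream.concat(events));
  }
}
```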
When a machine misbehaves (unknowingly or knowingly) and propagates
faulty events to the rest of the network (how to detect this?)
I don't see how this could happen, but if it does, it really depends on what you mean by a faulty event. You could have some Sagas/Process managers that analyze the events and trigger emails to a supervisor of some kind, as in the sketch below.
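A hedged sketch of such a process manager, again reusing `DomainEvent` from above; the suspicion rules and the `notifySupervisor` callback are purely illustrative placeholders for whatever detection logic and notification channel you actually use:

```typescript
interface SuspicionRule {
  description: string;
  isSuspicious(event: DomainEvent): boolean;
}

class FaultyEventMonitor {
  constructor(
    private rules: SuspicionRule[],
    private notifySupervisor: (message: string) => Promise<void>
  ) {}

  // Called for every event read from the store or published on the bus.
  async onEvent(event: DomainEvent): Promise<void> {
    for (const rule of this.rules) {
      if (rule.isSuspicious(event)) {
        await this.notifySupervisor(`Suspicious event ${event.type}: ${rule.description}`);
      }
    }
  }
}
```

The monitor only observes the stream; it never rejects or rewrites events, which keeps the write side untouched.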