Is there a way to handle this situation with a StatefulSet? Can a Kubernetes operator handle cases like this, e.g. if I monitor the container and it dies for any reason, the operator kills the pod forcefully?
Not really, if you are using StatefulSets. A Kubernetes operator would just do the same: kill the pod, and Kubernetes will restart it. You could make the operator modify your StatefulSet and remove a replica, but StatefulSet pods have ordinal numbers, so even if you have 10 replicas and change the count to 8, the pods that get removed are the ones with the highest ordinals (8 and 9), and those may not be the pods you'd like to remove.
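To make the ordinal behavior concrete, here is a small sketch (the function name and pod naming are just for illustration) of how a StatefulSet decides which pods to delete on scale-down: pods are named `<name>-0` through `<name>-(replicas-1)`, and the highest ordinals go first, regardless of which pods are actually unhealthy.

```python
# Sketch of StatefulSet scale-down semantics: pods are named
# <name>-0 .. <name>-(replicas-1), and scaling down always removes
# the highest ordinals first, healthy or not.
def pods_removed_on_scale_down(name, old_replicas, new_replicas):
    """Return pod names deleted when scaling down, highest ordinal first."""
    return [f"{name}-{i}" for i in range(old_replicas - 1, new_replicas - 1, -1)]

# Scaling a 10-replica StatefulSet "web" down to 8 removes web-9, then web-8,
# even if the pod you wanted gone is, say, web-3.
print(pods_removed_on_scale_down("web", 10, 8))
```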
Nevertheless, you could create your own operator that manages the Pods on its own with its own controller, without using any of the built-in Kubernetes controllers such as ReplicaSets, Deployments, StatefulSets, Jobs, etc. It would be something unique to your workloads and would determine when pods get restarted, deleted, and so on. If you'd like to go that route, there are a handful of projects that can help you get started:
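As a rough illustration of what such a controller could do on each pod event, here is a hedged sketch of the decision step only (the function name and the status shape are made up for this example; a real controller would read container statuses from the pod object and then call the Kubernetes API to delete the pod):

```python
# Hypothetical decision logic a custom controller might run per pod event:
# force-delete the pod if any container terminated with a non-zero exit code.
def should_force_delete(container_statuses):
    """container_statuses: list of dicts like
    {"name": ..., "state": "running" | "terminated", "exit_code": int}"""
    return any(
        s["state"] == "terminated" and s.get("exit_code", 0) != 0
        for s in container_statuses
    )

statuses = [
    {"name": "app", "state": "running"},
    {"name": "sidecar", "state": "terminated", "exit_code": 137},
]
print(should_force_delete(statuses))  # True: the sidecar died, so kill the pod
```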
Also, can we handle this situation with multiple containers?
With an operator, it is what you make of it. You can always set restartPolicy: Never, but the lowest common workload denominator in Kubernetes is the Pod. In other words, if you have 3 containers in your pod and 2 of them are up but 1 of them is down, the pod will not be 'Ready'.
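The readiness rule above reduces to a simple conjunction, sketched here (the helper name is made up): a pod's Ready condition requires every container in it to be ready.

```python
# A pod is Ready only when all of its containers are ready.
def pod_is_ready(container_ready_flags):
    return all(container_ready_flags)

# 3 containers, 2 up and 1 down -> the whole pod is not Ready.
print(pod_is_ready([True, True, False]))  # False
```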