As I understand it, you want to rate-limit traffic to the service running inside POD-1.
There are multiple ways to implement rate limiting, but in your scenario POD-2 sends requests to POD-1 over the internal network, so you cannot rely on the internal Pod IP or cluster IP to enforce the limit.
To implement rate limiting on the internal network you can use an Envoy proxy, or a service mesh if you are already running one.
Istio supports two types of rate limiting via Envoy: global and local.
https://istio.io/latest/docs/tasks/policy-enforcement/rate-limit/#:~:text=Global%20rate%20limiting%20uses%20a,the%20global%20rate%20limiting%20service.
If you want instance-level rate limiting, I would suggest Envoy's local rate limiting:
https://istio.io/latest/docs/tasks/policy-enforcement/rate-limit/#local-rate-limit
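As a rough sketch of what the local variant looks like (the `app` label, namespace, and bucket numbers below are placeholders, and the exact fields depend on your Istio/Envoy version), an `EnvoyFilter` inserts the `local_ratelimit` HTTP filter on POD-1's inbound sidecar listener:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: filter-local-ratelimit-svc   # name is arbitrary
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      app: pod-1-app                 # placeholder: POD-1's label
  configPatches:
    - applyTo: HTTP_FILTER
      match:
        context: SIDECAR_INBOUND
        listener:
          filterChain:
            filter:
              name: "envoy.filters.network.http_connection_manager"
      patch:
        operation: INSERT_BEFORE
        value:
          name: envoy.filters.http.local_ratelimit
          typed_config:
            "@type": type.googleapis.com/udpa.type.v1.TypedStruct
            type_url: type.googleapis.com/envoy.extensions.filters.http.local_ratelimit.v3.LocalRateLimit
            value:
              stat_prefix: http_local_rate_limiter
              token_bucket:            # example: 100 requests per minute
                max_tokens: 100
                tokens_per_fill: 100
                fill_interval: 60s
              filter_enabled:
                runtime_key: local_rate_limit_enabled
                default_value:
                  numerator: 100
                  denominator: HUNDRED
              filter_enforced:
                runtime_key: local_rate_limit_enforced
                default_value:
                  numerator: 100
                  denominator: HUNDRED
```

Requests over the bucket's rate get an HTTP 429 from the sidecar before they ever reach your container.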
However, there is always another way around: you can write code that looks up the Service or POD-2 IP via the Kubernetes API and counts requests per IP to enforce the limit yourself. Keep in mind that the Pod IP may change whenever you redeploy the service or roll out a sprint release.
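To make that idea concrete, here is a minimal sketch of the DIY approach: a token bucket keyed by caller IP. The IPs, limits, and the Kubernetes lookup mentioned in the comment are all hypothetical, not something from your setup:

```python
import time
from collections import defaultdict


class TokenBucket:
    """Per-caller token-bucket limiter: a sketch of the "count requests
    per Pod IP" approach. In a real cluster the caller IPs would be
    discovered through the Kubernetes API (e.g. listing pods by label
    selector), which is exactly why this is fragile: Pod IPs change on
    every redeploy."""

    def __init__(self, capacity: float, refill_per_sec: float):
        self.capacity = capacity
        self.refill_per_sec = refill_per_sec
        self._tokens = defaultdict(lambda: capacity)  # tokens left per IP
        self._last = defaultdict(time.monotonic)      # last-seen time per IP

    def allow(self, caller_ip: str) -> bool:
        """Return True if a request from caller_ip is within the limit."""
        now = time.monotonic()
        elapsed = now - self._last[caller_ip]
        self._last[caller_ip] = now
        # Refill proportionally to the time elapsed since the last request.
        self._tokens[caller_ip] = min(
            self.capacity,
            self._tokens[caller_ip] + elapsed * self.refill_per_sec,
        )
        if self._tokens[caller_ip] >= 1:
            self._tokens[caller_ip] -= 1
            return True
        return False


# Example: allow at most 2 requests, no refill, per caller IP.
limiter = TokenBucket(capacity=2, refill_per_sec=0)
print(limiter.allow("10.1.0.5"))  # first two requests pass
print(limiter.allow("10.1.0.5"))
print(limiter.allow("10.1.0.5"))  # third is rejected
```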
The above solution is not great because it is not agnostic: you could not move your workloads out of Kubernetes without redesigning your applications or completely rethinking your infrastructure.
You can also use Istio rate limiting, if it fits your scenario, by sending a custom header in each request from POD-2 to POD-1. You can put any value in the custom header and rate-limit based on that value.
Don't write code for adding the custom header; you can do that with Istio as well. Example: Adding custom response headers using Istio's (1.6.0) envoy lua filter
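A rough sketch of that trick (the `app` label and the `x-caller` header are made up; the Lua filter config depends on your Istio/Envoy version): an `EnvoyFilter` on POD-2's outbound sidecar stamps the header on every request it sends:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: add-caller-header
spec:
  workloadSelector:
    labels:
      app: pod-2-app                 # placeholder: POD-2's label
  configPatches:
    - applyTo: HTTP_FILTER
      match:
        context: SIDECAR_OUTBOUND
        listener:
          filterChain:
            filter:
              name: "envoy.filters.network.http_connection_manager"
              subFilter:
                name: "envoy.filters.http.router"
      patch:
        operation: INSERT_BEFORE
        value:
          name: envoy.lua
          typed_config:
            "@type": "type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua"
            inline_code: |
              -- Tag every outbound request so POD-1's sidecar can
              -- identify the caller without relying on the Pod IP.
              function envoy_on_request(request_handle)
                request_handle:headers():add("x-caller", "pod-2")
              end
```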
Example of rate limiting based on a key/value header: https://domagalski-j.medium.com/istio-rate-limits-for-egress-traffic-8697df490f68
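Roughly, the header-keyed variant works by adding a route-level `rate_limits` action that turns the header value into a descriptor, plus per-descriptor token buckets in the local rate limit filter. The fragment below is only a sketch against Envoy's `LocalRateLimit` v3 API; the `x-caller` header and `pod-2` value are placeholders matching nothing in your cluster:

```yaml
# Fragment of an EnvoyFilter patch (applyTo: HTTP_ROUTE, operation: MERGE)
# on POD-1's inbound route.
value:
  route:
    rate_limits:
      - actions:
          - request_headers:
              header_name: x-caller      # placeholder header
              descriptor_key: caller
  typed_per_filter_config:
    envoy.filters.http.local_ratelimit:
      "@type": type.googleapis.com/udpa.type.v1.TypedStruct
      type_url: type.googleapis.com/envoy.extensions.filters.http.local_ratelimit.v3.LocalRateLimit
      value:
        stat_prefix: http_local_rate_limiter
        token_bucket:                    # default bucket for untagged callers
          max_tokens: 100
          tokens_per_fill: 100
          fill_interval: 60s
        filter_enabled:
          default_value: {numerator: 100, denominator: HUNDRED}
        filter_enforced:
          default_value: {numerator: 100, denominator: HUNDRED}
        descriptors:
          - entries:
              - key: caller
                value: pod-2             # placeholder header value
            token_bucket:                # tighter bucket for POD-2's requests
              max_tokens: 10
              tokens_per_fill: 10
              fill_interval: 60s
```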