Ok so first off, you do not need Selenium. It's very rare that you ever need it, even for JavaScript/AJAX-heavy pages. If you get that deep into AJAX calls, you usually just need to GET/POST the XSRF token back and forth until you get the data you want (sketch below). Selenium is heavy, bloated, and slow compared to plain HTTP calls via requests, so avoid it when you can. If you're completely stuck and can't work out the AJAX requests and tokens, then by all means use it; better something than nothing.
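To make that token round-trip concrete, here's a minimal sketch using a requests.Session. The URL and the csrf_token field name are made up for illustration; swap in whatever the site you're scraping actually embeds (check the Network tab in your browser's dev tools):

    import requests
    from bs4 import BeautifulSoup

    session = requests.Session()  # carries cookies across the GET and the POST

    # 1) GET the page that embeds the anti-forgery token
    # (hypothetical URL and field name, for illustration only)
    page = session.get("https://example.com/login")
    soup = BeautifulSoup(page.text, "lxml")
    token = soup.find("input", {"name": "csrf_token"})["value"]

    # 2) POST the token back along with the rest of the form data
    response = session.post(
        "https://example.com/login",
        data={"csrf_token": token, "user": "me", "password": "secret"},
    )
    print(response.status_code)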
Now, the reason you're not getting the desired response is that what your browser sees and what the python-requests package sees are two completely different responses. So right from the start, you can't even navigate to where you're going because you're looking at the wrong map: the browser has its own map, and requests gets an entirely different one. That's where the standard-library pprint module comes in very handy (pictures below). pprint formats the text you get back into a cleaner structure, so the response is much easier to read.
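For example (same video URL as the code at the bottom), pprint makes both the headers and the raw HTML far easier to scan than a bare print:

    import pprint as pp
    import requests

    response = requests.get("https://www.youtube.com/watch?v=hHW1oY26kxQ")

    # one header per line instead of one giant run-on repr
    pp.pprint(dict(response.headers))

    # long strings get wrapped into readable chunks
    pp.pprint(response.text[:500])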
Lastly, I use Jupyter Notebook from Anaconda because it lets me work on one chunk of code at a time without having to re-run the whole program. If you're not already using Jupyter Notebooks, I suggest you give them a go; it helps you see how everything works, with portions of your output "frozen in time".
Best of luck! Hope you weren't too discouraged. This all takes time.
Here is the workflow I used to solve your problem:
![enter image description here](https://i.stack.imgur.com/vHrTg.jpg)
![enter image description here](https://i.stack.imgur.com/ZeTIG.png)
    from bs4 import BeautifulSoup
    import requests
    import pprint as pp  # handy for inspecting the raw response while debugging

    # pretend to be a regular browser; YouTube serves different HTML otherwise
    USER_AGENT = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"

    url = "https://www.youtube.com/watch?v=hHW1oY26kxQ"
    response = requests.get(url, headers={"User-Agent": USER_AGENT})
    soup = BeautifulSoup(response.text, "lxml")

    # the uploader block sits in the "watch7-user-header" div;
    # print every link inside it (the channel link among them)
    for div in soup.find_all("div", {"id": "watch7-user-header"}):
        for a in div.find_all("a"):
            print(a["href"])
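One caveat on the selector: watch7-user-header is just the id YouTube was serving in the server-rendered HTML when I inspected it. If YouTube ships different markup later (or serves a different page to your User-Agent), you'll need to re-inspect the response, pprint is your friend here, and adjust the find_all call accordingly.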