I'm new to graph theory and have been solving a few problems lately. I was looking for a way to find the depth of every node in the graph. Here's how I formalized it on paper:
Run a for loop iterating over every vertex.
If a vertex has not been visited yet, run a DFS with that vertex as the source.
- Inside the DFS, as long as there are more vertices to go to, we keep going deeper (as in an ordinary depth-first search) and keep a counter, cnt1, which increments at every step.
- While backtracking from the recursive DFS calls, we initialize a second counter, cnt2, starting from the count reached at the deepest vertex, give the value cnt1 - cnt2 to each vertex on the way back, and keep decreasing cnt2.
I'm not sure whether this is correct, and I haven't been able to turn it into working code; my rough attempt is below. Any suggestions? For the record, the graph is stored as an adjacency list, i.e. an array of vectors:
vector<int> a[100];
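
Here is my rough attempt at turning the idea into code. Instead of the two counters cnt1/cnt2, I pass the current depth down the recursion, which I think amounts to the same thing, but I'm not sure it gives the right answer if the outer loop happens to start the DFS at a vertex that isn't a root:

#include <bits/stdc++.h>
using namespace std;

vector<int> a[100];        // adjacency list, as declared above
int depthOf[100];          // the depth I want to compute for each vertex
bool visited[100];

// Pass the current depth d down the recursion instead of keeping
// cnt1/cnt2 on the way back up -- I think this is equivalent to my idea on paper.
void dfs(int u, int d) {
    visited[u] = true;
    depthOf[u] = d;
    for (int v : a[u])
        if (!visited[v])
            dfs(v, d + 1);
}

int main() {
    int n = 5;                      // example: 5 vertices, edges 0->1, 0->2, 2->3, 3->4
    a[0] = {1, 2};
    a[2] = {3};
    a[3] = {4};

    for (int u = 0; u < n; ++u)     // step 1: iterate over every vertex
        if (!visited[u])            // step 2: DFS from each unvisited vertex
            dfs(u, 1);              // not sure whether the source should get 1 or 0

    for (int u = 0; u < n; ++u)
        cout << "depth of " << u << " = " << depthOf[u] << '\n';
}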
EDIT: The input graph is a collection of directed trees (a forest). Each node has a depth label, denoting the number of nodes on the simple path from the root to that node, so I need to find this depth for every node.
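
To make that concrete, here is roughly what I'm trying now for the forest case. Detecting roots as vertices with in-degree 0 is my own guess at how to handle a collection of trees, so please correct me if there is a better way:

#include <bits/stdc++.h>
using namespace std;

vector<int> a[100];        // adjacency list of the directed forest
int depthOf[100];          // number of nodes on the path from the root, root included

void dfs(int u, int d) {
    depthOf[u] = d;
    for (int v : a[u])
        dfs(v, d + 1);     // a tree has no cycles, so no visited[] check should be needed
}

int main() {
    int n = 6;                          // example forest: 0->1, 0->2, 2->3 and 4->5
    a[0] = {1, 2};
    a[2] = {3};
    a[4] = {5};

    vector<int> indeg(n, 0);            // my guess: a root is a vertex with no incoming edge
    for (int u = 0; u < n; ++u)
        for (int v : a[u])
            ++indeg[v];

    for (int u = 0; u < n; ++u)
        if (indeg[u] == 0)
            dfs(u, 1);                  // the root itself is on the path, so its depth is 1

    for (int u = 0; u < n; ++u)
        cout << "depth of " << u << " = " << depthOf[u] << '\n';
}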