I realized I forgot to show you something in class, so I created a new gist for it.
It is kbacon4.txt, which is the file saved by kevin_bacon4.py — the fourth version of the script we went over in class when talking about regex (the four scripts are in another gist). This text file is the final list of local links scraped off the Kevin Bacon Wikipedia page.
- kevin_bacon1.py ➞ kbacon1.txt (781 lines)
- kevin_bacon2.py ➞ kbacon2.txt (681 lines)
- kevin_bacon3.py ➞ kbacon3.txt (451 lines)
- kevin_bacon4.py ➞ kbacon4.txt (400 lines)
The point is that by refining first the "find" parameters and then the regex string, we match fewer and fewer links, until eventually we are left with only the set of links we want to crawl.
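As a minimal sketch of that final refinement step: a common pattern for this exercise is to keep only hrefs that start with `/wiki/` and contain no colon, which filters out `Category:`, `File:`, `Talk:` and similar non-article pages. The HTML snippet below is invented for illustration, and the exact pattern in kevin_bacon4.py may differ:

```python
import re

# A tiny, made-up stand-in for a chunk of the Kevin Bacon Wikipedia page
# (the real script fetches the live page).
html = '''
<div id="bodyContent">
  <a href="/wiki/Footloose_(1984_film)">Footloose</a>
  <a href="/wiki/Kyra_Sedgwick">Kyra Sedgwick</a>
  <a href="/wiki/Category:Living_people">category page</a>
  <a href="//en.wikipedia.org/wiki/Main_Page">protocol-relative link</a>
  <a href="#cite_note-1">footnote</a>
</div>
'''

# Match only local article links: the href must start with /wiki/ and
# contain no colon, which eliminates Category:, File:, Talk:, etc.
pattern = re.compile(r'href="(/wiki/[^":]*)"')

links = pattern.findall(html)
print(links)  # only the two article links survive
```

Each version of the script tightens either where it looks (e.g. restricting the search to the `bodyContent` div) or what it accepts (the regex), which is why the line counts in the list above shrink from 781 to 400.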
Scan the list of 400 links and you will see that they all point to Wikipedia pages, although some might still be worth eliminating, depending on WHY we are collecting these links.