Here is a Python program that extracts all the link URLs from a website's page and saves them to a txt file:

import requests
from bs4 import BeautifulSoup

# set the URL of the website to be scraped
url = "https://example.com"

# send a request to the URL and get the HTML content
response = requests.get(url)
html = response.content

# parse the HTML content with BeautifulSoup
soup = BeautifulSoup(html, "html.parser")

# find all the links in the HTML content
links = soup.find_all("a")

# create an empty list to store the link URLs
link_urls = []

# loop through the links and extract their URLs
for link in links:
    link_url = link.get("href")
    # skip <a> tags without an href and keep only absolute URLs
    if link_url and (link_url.startswith("http") or link_url.startswith("www")):
        link_urls.append(link_url)

# create a new file called "links.txt" and write the link URLs to it
with open("links.txt", "w") as f:
    for link_url in link_urls:
        f.write(link_url + "\n")

print("All link pages downloaded and saved to links.txt")

This program uses the requests library to fetch the page's HTML content, then uses the BeautifulSoup library to parse the HTML and find all the <a> tags. It extracts each link's href, keeps only absolute URLs (relative links such as "/about" are skipped), and stores them in a list. Finally, it writes the collected URLs to a new file called "links.txt". Note that it saves only the URLs; it does not download the content of each linked page.
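If you also want to download the content of each linked page rather than just its URL, the script could be extended along the lines below. This is a minimal sketch, not a tested solution: it assumes the same requests/BeautifulSoup setup, uses urllib.parse.urljoin to resolve relative links against the starting URL, and appends each page's HTML to a single file named "pages.txt" (the starting URL, output file name, and timeout value are placeholders to adjust).

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

# starting URL (placeholder)
url = "https://example.com"

# fetch and parse the starting page
response = requests.get(url)
soup = BeautifulSoup(response.content, "html.parser")

# resolve every href against the base URL so relative links work too
link_urls = []
for link in soup.find_all("a"):
    href = link.get("href")
    if href:
        link_urls.append(urljoin(url, href))

# download each linked page and append its HTML to one text file
with open("pages.txt", "w", encoding="utf-8") as f:
    for link_url in link_urls:
        try:
            page = requests.get(link_url, timeout=10)
            f.write("--- " + link_url + " ---\n")
            f.write(page.text + "\n\n")
        except requests.RequestException as e:
            print("Skipping " + link_url + ": " + str(e))

Writing every page into one text file keeps the output simple; for larger sites you would typically save each page to its own file and add a short delay between requests to avoid overloading the server.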

I want a program in Python that can download all the link pages on one website and save them to a txt file.
