To implement this, you can use the Python libraries requests and BeautifulSoup to fetch and parse the web pages, and the standard-library urllib.parse to resolve the relative image URLs before downloading.

First, install the required libraries by running the following command:

pip install requests beautifulsoup4

Then you can use the following code to scrape the pages and download the images:

import os
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin


def download_image(url, save_path):
    response = requests.get(url, stream=True, timeout=10)
    if response.status_code == 200:
        with open(save_path, 'wb') as file:
            for chunk in response.iter_content(1024):
                file.write(chunk)


def scrape_heroes_and_skins():
    # Create the folder that will hold the images
    save_folder = 'D:/hero'
    os.makedirs(save_folder, exist_ok=True)

    # Fetch the hero list page
    url = 'https://pvp.qq.com/web201605/herolist.shtml'
    response = requests.get(url, timeout=10)
    response.encoding = 'gbk'  # the page is GBK-encoded

    # Parse the hero list page; the class names used below must match the
    # site's current markup and may need adjusting if the page has changed
    soup = BeautifulSoup(response.text, 'html.parser')
    hero_list = soup.find_all('li', class_='hero-list')

    # Iterate over the heroes
    for hero in hero_list:
        # Hero name
        hero_name = hero.find('a').text

        # Create a folder for this hero
        hero_folder = os.path.join(save_folder, hero_name)
        os.makedirs(hero_folder, exist_ok=True)

        # Fetch the hero's skin page (relative href resolved against the list page)
        hero_url = urljoin(url, hero.find('a')['href'])
        response = requests.get(hero_url, timeout=10)
        response.encoding = 'gbk'

        # Parse the skin page
        soup = BeautifulSoup(response.text, 'html.parser')
        skin_list = soup.find_all('li', class_='pic-pf-list')

        # Iterate over the skins
        for skin in skin_list:
            # Skin name, taken from the image's alt attribute
            skin_name = skin.find('img')['alt']

            # Skin image URL (relative src resolved against the page URL)
            skin_url = urljoin(url, skin.find('img')['src'])

            # Download the skin image
            save_path = os.path.join(hero_folder, f'{skin_name}.jpg')
            download_image(skin_url, save_path)


if __name__ == '__main__':
    scrape_heroes_and_skins()
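As a side note, two helpers do the path work in the script: urllib.parse.urljoin resolves a relative href or src against the page URL, and os.path.join builds the local save path. A quick illustration (the href below is a made-up example, not a real hero page):

```python
import os
from urllib.parse import urljoin

page = 'https://pvp.qq.com/web201605/herolist.shtml'

# A relative link replaces the last path segment of the page URL.
detail = urljoin(page, 'herodetail/example.shtml')
print(detail)  # https://pvp.qq.com/web201605/herodetail/example.shtml

# os.path.join inserts the platform's separator between segments.
print(os.path.join('D:/hero', 'HeroName', 'SkinName.jpg'))
```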

The code above creates, inside the hero folder on the D: drive, one subfolder per hero (named after the hero) and downloads that hero's skin images into it. Make sure your account has permission to create folders and save files on the D: drive.
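One caveat: the skin name comes straight from the image's alt attribute, and if it ever contains characters that Windows does not allow in filenames (such as / \ : * ? " < > |), the open() call will fail. A small hypothetical helper (not part of the original code) that strips those characters before the save path is built:

```python
import re

def sanitize_filename(name):
    # Replace characters that are invalid in Windows filenames with '_'.
    return re.sub(r'[\\/:*?"<>|]', '_', name).strip()

print(sanitize_filename('Skin?Name/Alt'))  # Skin_Name_Alt
```

You would then build the path as os.path.join(hero_folder, f'{sanitize_filename(skin_name)}.jpg').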

