Below is example code for a Python crawler that downloads all League of Legends skin images:

```python
import os

import requests
from bs4 import BeautifulSoup

# Create a directory if it does not already exist
def create_directory(path):
    if not os.path.exists(path):
        os.makedirs(path)

# Download one image to the given path
def download_image(url, path):
    response = requests.get(url, stream=True, timeout=10)
    if response.status_code == 200:
        with open(path, 'wb') as file:
            for chunk in response.iter_content(1024):
                file.write(chunk)

# Extract the skin image URLs from a hero detail page
def get_skin_images(url):
    response = requests.get(url, timeout=10)
    soup = BeautifulSoup(response.text, 'html.parser')
    images = soup.find_all('img', class_='skin-pic')
    skin_images = []
    for image in images:
        skin_images.append(image['data-img'])
    return skin_images

# Crawl the skin images for a list of heroes
def crawl_skin_images():
    base_url = 'https://lol.qq.com/data/info-defail.shtml?id={}'
    hero_ids = [1, 2, 3, 4, 5]  # For this example, only crawl five heroes
    save_directory = 'skins'
    create_directory(save_directory)

    for hero_id in hero_ids:
        url = base_url.format(hero_id)
        skin_images = get_skin_images(url)
        hero_directory = os.path.join(save_directory, str(hero_id))
        create_directory(hero_directory)

        for i, image_url in enumerate(skin_images):
            image_name = '{}_{}.jpg'.format(hero_id, i)
            image_path = os.path.join(hero_directory, image_name)
            download_image(image_url, image_path)

            print('Downloaded image: {}'.format(image_path))

crawl_skin_images()
```

Note that this is only example code; you will likely need to adapt it to your specific needs and to the site's actual page structure. Also, scraping a website's data may raise copyright and legal issues, so make sure you have the proper rights and authorization before doing so.
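One caveat: `get_skin_images` assumes the page serves static HTML containing `<img class="skin-pic" data-img="...">` tags. Pages like this are often rendered with JavaScript, in which case the selector finds nothing and you would need to adjust the extraction accordingly. The class name and `data-img` attribute here are assumptions carried over from the example above, not a confirmed page structure. One way to sanity-check the extraction logic offline, using only the standard library so it does not depend on `bs4` or the network, is to run it against a small hand-written HTML sample:

```python
from html.parser import HTMLParser

class SkinImageParser(HTMLParser):
    """Collects data-img URLs from <img class="skin-pic"> tags."""
    def __init__(self):
        super().__init__()
        self.skin_images = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # Mirror the BeautifulSoup query: <img> tags with class "skin-pic"
        if tag == 'img' and attrs.get('class') == 'skin-pic':
            url = attrs.get('data-img')
            if url:
                self.skin_images.append(url)

# Hypothetical sample mimicking the assumed page structure
sample_html = '''
<div>
  <img class="skin-pic" data-img="https://example.com/skin0.jpg">
  <img class="skin-pic" data-img="https://example.com/skin1.jpg">
  <img class="other" data-img="https://example.com/ignored.jpg">
</div>
'''

parser = SkinImageParser()
parser.feed(sample_html)
print(parser.skin_images)
```

If a check like this passes against a saved copy of the real page's HTML, the selector is correct; if it returns an empty list, the content is probably injected by JavaScript and you would need a different approach (for example, finding the underlying data endpoint in the browser's network tab).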


Original source: https://www.cveoy.top/t/topic/pxxi. Copyright belongs to the author. Do not repost or scrape.