**Building a Local Baidu Search Tool in Python: Web Crawling and Indexing**

Want to run Baidu searches anytime, even without a network connection? This article walks you through building a simple local Baidu search tool in Python: it crawls Baidu search results, builds a local index, and gives you offline search.

Highlights:

* Crawl Baidu search results: fetch result pages with the requests library.
* Parse HTML: extract titles and links with BeautifulSoup.
* Build a local index: save the crawled data to local files for fast lookup.
* User interface: a Tkinter GUI for a friendly search experience.

**Complete code:**

```python
import requests
from bs4 import BeautifulSoup
import time
import tkinter as tk
import webbrowser
import random
import os
import re


def get_random_user_agent():
    # Rotate among a few desktop Chrome user agents
    user_agents = [
        'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36',
        'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.212 Safari/537.36',
        'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.82 Safari/537.36',
    ]
    return random.choice(user_agents)


def crawl_baidu(keyword, page_limit):
    headers = {'User-Agent': get_random_user_agent()}
    results = []
    for page in range(1, page_limit + 1):
        # Baidu paginates with the pn parameter: 0, 10, 20, ...
        url = f'https://www.baidu.com/s?wd={keyword}&pn={(page - 1) * 10}'
        # Random delay between requests to keep the crawl rate polite
        time.sleep(random.uniform(0.5, 1.0))
        response = requests.get(url, headers=headers)
        soup = BeautifulSoup(response.text, 'html.parser')
        for result in soup.find_all('div', class_='result'):
            result_title = result.find('h3').get_text()
            result_url = result.find('a')['href']
            results.append((result_title, result_url))
    return results


def open_url(url):
    webbrowser.open(url)


def crawl_and_index():
    # Split the input on Chinese/ASCII commas and whitespace
    keywords = re.split(r'[,,\s]+', entry_keywords.get())
    page_limit = int(entry_pages.get())  # number of pages to crawl
    # Create a folder for the index files
    if not os.path.exists('webpages'):
        os.makedirs('webpages')
    # Crawl each keyword and save its results
    for keyword in keywords:
        search_results = crawl_baidu(keyword, page_limit)
        if len(search_results) > 0:
            file_name = f'webpages/{keyword}.html'
            with open(file_name, 'w', encoding='utf-8') as file:
                # Each entry is stored as three lines: title, URL, blank separator
                for index, (title, url) in enumerate(search_results, start=1):
                    file.write(f'{index}. {title}\n')
                    file.write(f'{url}\n')
                    file.write('\n')
        else:
            print(f"关键词 '{keyword}' 没有搜索结果")  # no results for this keyword


def search_local():
    keyword = entry_search.get()
    result_text.delete('1.0', tk.END)
    # Walk the index files and collect matching entries
    for file_name in os.listdir('webpages'):
        with open(f'webpages/{file_name}', 'r', encoding='utf-8') as file:
            lines = file.readlines()
        if len(lines) < 2:
            continue
        found_results = {}
        # Entries are stored as (title, URL, blank) triples
        for i in range(0, len(lines), 3):
            if i + 1 >= len(lines):
                break
            title = lines[i].strip()
            url = lines[i + 1].strip()
            if keyword.lower() in title.lower() or keyword.lower() in url.lower():
                found_results[len(found_results) + 1] = (title, url)
        if len(found_results) > 0:
            # Strip the '.html' suffix when showing the source keyword
            result_text.insert(tk.END, f'搜索结果 - {file_name[:-5]}:\n\n', 'title')
            for index, (title, url) in found_results.items():
                result_text.insert(tk.END, f'{index}. {title}\n', 'found_title')
                result_text.insert(tk.END, f'{url}\n', f'link{index}')
                result_text.tag_configure(f'link{index}', foreground='blue', underline=True)
                # Open the link in the default browser on click
                result_text.tag_bind(f'link{index}', '<Button-1>',
                                     lambda event, url=url: open_url(url))
            result_text.insert(tk.END, '\n')
    if result_text.get('1.0', tk.END) == '\n':
        result_text.insert(tk.END, '没有搜索结果\n')  # nothing matched


# Build the UI
window = tk.Tk()
window.title('百度搜索')
window.geometry('800x600')

label_keywords = tk.Label(window, text='请输入关键词(用逗号或空格隔开):')
label_keywords.pack()
entry_keywords = tk.Entry(window)
entry_keywords.pack()

label_pages = tk.Label(window, text='请输入爬取页数:')
label_pages.pack()
entry_pages = tk.Entry(window)
entry_pages.pack()

crawl_button = tk.Button(window, text='爬取并索引', command=crawl_and_index)
crawl_button.pack()

label_search = tk.Label(window, text='请输入搜索关键词:')
label_search.pack()
entry_search = tk.Entry(window)
entry_search.pack()

search_button = tk.Button(window, text='搜索', command=search_local)
search_button.pack()

scrollbar = tk.Scrollbar(window)
scrollbar.pack(side=tk.RIGHT, fill=tk.Y)
result_text = tk.Text(window, yscrollcommand=scrollbar.set)
result_text.pack(fill=tk.BOTH)
scrollbar.config(command=result_text.yview)

window.mainloop()
```

**Usage:**

1. Save the code as a Python file, e.g. baidu_search.py.
2. Run python baidu_search.py in a terminal.
3. In the window that opens, enter keywords and the number of pages to crawl, then click the "爬取并索引" (crawl and index) button.
4. Once crawling finishes, enter a search keyword and click the "搜索" (search) button to view results.

Note: when crawling, respect the site's robots.txt rules and keep the request rate reasonable to avoid putting pressure on Baidu's servers. This tool is for learning and exchange only; do not use it for commercial purposes.

Hopefully this tool lets you search Baidu conveniently even when you have no network connection!
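One detail worth noting: the code above interpolates the raw keyword straight into the URL, but Chinese keywords and spaces should be percent-encoded. A small sketch of building the query string with the standard library's urllib.parse.urlencode (passing params= to requests.get would achieve the same encoding automatically; the example query is made up):

```python
from urllib.parse import urlencode

# Build a properly encoded Baidu search URL for page 2 (pn=10).
params = {'wd': 'python 教程', 'pn': 10}
url = 'https://www.baidu.com/s?' + urlencode(params)
print(url)  # https://www.baidu.com/s?wd=python+%E6%95%99%E7%A8%8B&pn=10
```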
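The offline search relies on the index file layout: each entry is a title line, a URL line, and a blank separator line. The matching logic can be sketched as a standalone helper (search_index_text is a hypothetical name, not part of the tool above):

```python
def search_index_text(index_text, keyword):
    """Scan an index body stored as repeating (title, URL, blank) line triples."""
    lines = index_text.splitlines()
    hits = []
    for i in range(0, len(lines), 3):   # each entry occupies three lines
        if i + 1 >= len(lines):
            break
        title, url = lines[i].strip(), lines[i + 1].strip()
        # Case-insensitive substring match against title or URL
        if keyword.lower() in title.lower() or keyword.lower() in url.lower():
            hits.append((title, url))
    return hits

sample = '1. Python tutorial\nhttps://example.com/py\n\n2. Java guide\nhttps://example.com/java\n'
print(search_index_text(sample, 'python'))
# [('1. Python tutorial', 'https://example.com/py')]
```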
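The note above asks you to respect robots.txt, but the tool never actually checks it. A minimal sketch of such a check with the standard library's urllib.robotparser, using a made-up policy string (allowed_by_robots is a hypothetical helper, and real robots.txt rules would have to be fetched from the site):

```python
from urllib import robotparser

def allowed_by_robots(robots_txt, user_agent, url):
    """Check a URL against a robots.txt body before crawling it."""
    rp = robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch(user_agent, url)

# Example policy that blocks /s for every crawler
robots = 'User-agent: *\nDisallow: /s\n'
print(allowed_by_robots(robots, 'MyCrawler', 'https://www.baidu.com/s?wd=python'))  # False
print(allowed_by_robots(robots, 'MyCrawler', 'https://www.baidu.com/home'))         # True
```

In a real crawl you would download https://www.baidu.com/robots.txt once, parse it the same way, and skip any search URL the policy disallows.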


Original article: http://www.cveoy.top/t/topic/utl — copyright belongs to the author. Please do not repost or scrape!
