
Google News aggregator using the OpenAI API, summarizing Tech News in Arabic
Google News aggregator using the OpenAI API, highlighting the key issues in water-related news
Without further ado, here is the code for the application you can see in the screenshots above.
Copy and paste the code below and save it to a Python file, which you can run from the Windows Command Prompt or the Mac Terminal.
Save the code below to a file named google_news_aggregator.py, and update it with your own OpenAI API key.
import openai
import re
from GoogleNews import GoogleNews
import webbrowser
from tkinter import *
import requests
from bs4 import BeautifulSoup
openai.api_key = 'Use your own OpenAI API key'
def search_and_display():
    # Get the search input from the input box
    search_query = input_box.get()
    # Create a GoogleNews object and search for news articles
    googlenews = GoogleNews()
    googlenews.search(search_query)
    # Retrieve the search results and summarize each article
    try:
        result = googlenews.result()
    except AttributeError:
        print("No results found")
        return
    summaries = []
    for article in result:
        summary = summarize_article(article['desc'], article['link'])
        summaries.append(summary)
    # Update the text area with the search results and summaries
    text_area.delete('1.0', END)
    text_area.insert(END, f"Search results for '{search_query}':\n\n")
    for i, article in enumerate(result):
        text_area.insert(END, f"Article {i+1}\n")
        text_area.insert(END, f"Title: {article['title']}\n", 'title')
        text_area.insert(END, f"Summary: {summaries[i]}\n", 'content')
        text_area.insert(END, article['link'], ('content', 'hyperlink'))
        text_area.insert(END, "\n\n")
    text_area.tag_configure('hyperlink', foreground='blue', underline=True)
    text_area.tag_bind('hyperlink', '<Button-1>', open_link)
def summarize_article(article, url):
    response = requests.get(url)
    # Three different web scraping methods are tried in turn to collect data
    # from the news URL to pass to OpenAI.
    text = ""
    try:
        soup = BeautifulSoup(response.content, "html.parser")
        text = soup.get_text()
        text = text[:1000]  # Scraped page text, capped at 1000 characters
    except:
        pass
    if not text:
        try:
            text = response.json()
            text = str(text)[:1000]
        except:
            pass
    if not text:
        try:
            text = response.content
            text = str(text)[:1000]
        except:
            pass
    model_engine = "text-davinci-003"
    # Get the prompt text from the text box
    prompt_text = prompt_input.get("1.0", "end-1c")
    # Add the article description and scraped text to the prompt
    prompt = (
        f"{prompt_text}\n{article}\n\n"
        f"Here is some additional scraped data for context. Ignore anything spurious such as "
        f"HTML tags or social share/subscribe calls to action that doesn't relate to {article}:\n{text}"
    )
    response = openai.Completion.create(
        engine=model_engine,
        prompt=prompt,
        temperature=0.2,
        max_tokens=1500,
        n=1,
        stop=None,
    )
    summary = response.choices[0].text
    return re.sub(r'\s+', ' ', summary).strip()
def open_link(event):  # Open the link that was clicked in the text area
    text_widget = event.widget  # Get the widget that was clicked
    index = text_widget.index(f"@{event.x},{event.y}")
    tag_names = text_widget.tag_names(index)
    if 'hyperlink' in tag_names:
        line_start = text_widget.index(f"{index} linestart")
        line_end = text_widget.index(f"{index} lineend")
        line_text = text_widget.get(line_start, line_end)
        url_match = re.search(r"(?P<url>https?://[^\s]+)", line_text)
        if url_match:
            url = url_match.group("url")
            webbrowser.open_new(url)  # Open the URL in a new browser window
# Create a GoogleNews object
googlenews = GoogleNews()
# Create the Tkinter application and set the title
root = Tk()
root.title("Google News Aggregator")
root.configure(background='#F5F5F5')
# Create the input box label
input_label = Label(root, text="Enter search query:")
input_label.pack(padx=10, pady=10)
# Create the input box
input_box = Entry(root, width=50)
input_box.pack(padx=10, pady=10)
# Create the prompt label
prompt_label = Label(root, text="Enter prompt data:")
prompt_label.pack(padx=10, pady=10)
# Create the prompt input box
prompt_input = Text(root, height=5, width=50)
prompt_input.pack(padx=10, pady=10)
# Create the search button
search_button = Button(root, text="Search", command=search_and_display)
search_button.pack(padx=10, pady=10)
# Create the text area
text_area = Text(root, height=30, width=200, bg='#FFFFFF', fg='black')
scrollbar = Scrollbar(root)
scrollbar.pack(side=RIGHT, fill=Y)
text_area.pack(side=LEFT, fill=Y)
scrollbar.config(command=text_area.yview)
text_area.config(yscrollcommand=scrollbar.set)
text_area.insert(END, "Google News Aggregator\n\n")
text_area.tag_configure('title', background='lightblue', font=('Arial', 14, 'bold'))
text_area.tag_configure('content', background='yellow', font=('Arial', 12))
# Set the tag configuration for hyperlink text
text_area.tag_configure('hyperlink', foreground='blue', underline=True)
# Bind the hyperlink tag to open the link in a web browser
text_area.tag_bind('hyperlink', '<Button-1>', open_link)
# Start the main loop
root.mainloop()
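One note on the API key: the listing hard-codes it near the top of the file. If you prefer not to keep the key in the source, a common alternative (a minimal sketch, assuming the key is exported in an OPENAI_API_KEY environment variable rather than anything the original code requires) is to read it at startup:

import os
import openai

# Read the key from the environment instead of hard-coding it.
# OPENAI_API_KEY is an assumed variable name; adjust it to your own setup.
openai.api_key = os.environ.get("OPENAI_API_KEY", "Use your own OpenAI API key")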
Although Python users can simply copy, paste, and run the code from a Python file, we assume you have a recent version of Python that supports f-strings and that you are comfortable finding and installing Python modules. ChatGPT and Stack Overflow are excellent resources for working through any code issues.
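As a quick sanity check before launching the app, the short sketch below simply tries to import every third-party module the script uses (assuming the packages were installed with pip as openai, GoogleNews, requests, and beautifulsoup4; tkinter ships with most standard Python installers):

import importlib

# Modules imported by google_news_aggregator.py; bs4 comes from beautifulsoup4.
for module in ("openai", "GoogleNews", "requests", "bs4", "tkinter"):
    try:
        importlib.import_module(module)
        print(f"{module}: OK")
    except ImportError as error:
        print(f"{module}: missing ({error})")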
As a developer, combining news data with OpenAI's generative capabilities gives you access to a very powerful tool. With this combination, you can create a personalized news aggregator that delivers the latest news in real time, giving your users concise, customized summaries of the most important articles.
This works by harnessing machine learning and natural language processing, which let you automate the searching of news articles and the generation of their summaries. By leveraging these techniques, you can help people stay informed in today's fast-paced world while streamlining the way your users consume the news.
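To see the search half of that pipeline in isolation, here is a minimal, GUI-free sketch using the same GoogleNews package as the listing above (the query string and result count are just illustrative examples):

from GoogleNews import GoogleNews

# Search Google News and print the first few results, mirroring what
# search_and_display() does before any summarization takes place.
googlenews = GoogleNews()
googlenews.search("OpenAI")
for article in googlenews.result()[:3]:
    print(article['title'])
    print(article['link'])
    print()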
So if you want to build a news aggregator, or any other kind of AI-powered application, consider using these powerful tools to create a smarter, more personalized user experience.
If this article has inspired you, take a serious look at the purpose-built news aggregator APIs from APILayer, such as mediastack and financelayer, which come with straightforward documentation for a variety of programming languages that you and your developers can start integrating into your own software applications.