├── pyproject.toml
├── README.md
├── google_finance_scraper/
│ └── __init__.py
└── tests/
└── __init__.py
Navigate to the project directory and install Playwright:
cd google-finance-scraper
poetry add playwright
poetry run playwright install
Google Finance loads its content dynamically with JavaScript. Playwright can render JavaScript, which makes it well suited to scraping dynamic content from Google Finance.
Open the pyproject.toml file to check your project's dependencies, which should include:
[tool.poetry.dependencies]
python = "^3.12"
playwright = "^1.46.0"
*Note:* At the time of writing, the playwright version is 1.46.0, but it may change. Check for the latest version and update your pyproject.toml if necessary.
Finally, create a main.py file in the google_finance_scraper folder to hold your scraping logic.
The updated project structure should look like this:
google-finance-scraper/
├── pyproject.toml
├── README.md
├── google_finance_scraper/
│ ├── __init__.py
│ └── main.py
└── tests/
└── __init__.py
Your environment is now set up, and you're ready to start writing Python Playwright code to scrape Google Finance.
First, let's launch a Chromium browser instance using Playwright. While Playwright supports various browser engines, this tutorial uses Chromium:
import asyncio
from playwright.async_api import async_playwright

async def main():
    async with async_playwright() as playwright:
        browser = await playwright.chromium.launch(headless=False)  # Launch a Chromium browser
        context = await browser.new_context()
        page = await context.new_page()

if __name__ == "__main__":
    asyncio.run(main())
To run this script, you need to execute the main() function with an event loop at the end of the script, which is what asyncio.run(main()) does.
Next, navigate to the Google Finance page of the stock you want to scrape. Google Finance stock page URLs have the following format:
https://www.google.com/finance/quote/{ticker_symbol}
A ticker symbol is a unique code that identifies a publicly traded company on a stock exchange, such as AAPL for Apple Inc. or TSLA for Tesla, Inc. The URL changes with the ticker symbol, so replace {ticker_symbol} with the symbol of the specific stock you want to scrape.
import asyncio
from playwright.async_api import async_playwright

async def main():
    async with async_playwright() as playwright:
        # ...
        ticker_symbol = "AAPL:NASDAQ"  # Replace with the desired ticker symbol
        google_finance_url = f"https://www.google.com/finance/quote/{ticker_symbol}"
        await page.goto(google_finance_url)  # Navigate to the Google Finance page

if __name__ == "__main__":
    asyncio.run(main())
Here's the complete script so far:
import asyncio
from playwright.async_api import async_playwright

async def main():
    async with async_playwright() as playwright:
        # Launch a Chromium browser
        browser = await playwright.chromium.launch(headless=False)
        context = await browser.new_context()
        page = await context.new_page()

        ticker_symbol = "AAPL:NASDAQ"  # Replace with the desired ticker symbol
        google_finance_url = f"https://www.google.com/finance/quote/{ticker_symbol}"

        # Navigate to the Google Finance page
        await page.goto(google_finance_url)

        # Wait for a few seconds
        await asyncio.sleep(3)

        # Close the browser
        await browser.close()

if __name__ == "__main__":
    asyncio.run(main())
When you run this script, it opens the Google Finance page for a few seconds before terminating.
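A fixed sleep is fine for a demo, but waiting for a specific element is usually more robust when scraping dynamic pages. As a minimal sketch (using the price selector identified later in this tutorial), the asyncio.sleep(3) line could be replaced with:

# Wait until the price element is rendered, instead of sleeping for a fixed time
await page.wait_for_selector("div.YMlKec.fxKbKc", timeout=10000)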

Great! Now you can scrape data for any stock of your choice simply by changing the ticker symbol.
Note that launching the browser with a UI (headless=False) is great for testing and debugging. If you want to save resources and run the browser in the background, switch to headless mode:
browser = await playwright.chromium.launch(headless=True)
To scrape data effectively, you first need to understand the DOM structure of the web page. Suppose you want to extract the regular market price ($229.79), the change (+1.46), and the change percentage (+3.30%). These values are all contained within a single div element.

You can use the selector div.YMlKec.fxKbKc to extract the price, div.enJeMd div.JwB6zf for the percentage change, and span.P2Luy.ZYVHBb for the value change from Google Finance. A quick way to sanity-check these selectors is shown below.
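Here's a minimal sketch for verifying the selectors before writing the full scraper. It assumes the class names above are still current, since Google may change them at any time:

import asyncio
from playwright.async_api import async_playwright

async def check_selectors():
    async with async_playwright() as playwright:
        browser = await playwright.chromium.launch(headless=True)
        page = await browser.new_page()
        await page.goto("https://www.google.com/finance/quote/AAPL:NASDAQ")
        # Print the raw text of each element to confirm the selectors still match
        for name, selector in [
            ("price", "div.YMlKec.fxKbKc"),
            ("change %", "div.enJeMd div.JwB6zf"),
            ("change value", "span.P2Luy.ZYVHBb"),
        ]:
            element = await page.query_selector(selector)
            print(name, "->", await element.inner_text() if element else "not found")
        await browser.close()

asyncio.run(check_selectors())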
Great! Next, let's look at how to extract the closing time, displayed on the page as "06:02:19 UTC-4".

To select the closing time, use the following XPath expression:
//div[contains(text(), "Closed:")]
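Playwright's query_selector accepts XPath expressions as well as CSS selectors, so this can be passed in directly. A small fragment, reusing the page object from the script above:

# query_selector accepts XPath expressions as well as CSS selectors
close_time_element = await page.query_selector('//div[contains(text(), "Closed:")]')
if close_time_element:
    print(await close_time_element.inner_text())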
Now, let's move on to extracting key company data from the table, such as market cap, previous close, and volume:

As you can see, the data is structured as a table, with individual div tags representing each field, starting with "Previous close" and ending with "Primary exchange".
You can use the .mfs7Fc selector to extract the labels and .P6K39c for the corresponding values from the Google Finance table. These selectors target elements by class name, letting you retrieve and process the table's data in pairs, as the sketch below shows.
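Here's a minimal sketch of that pairing idea, assuming the two selectors return the label and value elements in matching document order:

# Labels (.mfs7Fc) and values (.P6K39c) appear in the same order,
# so zipping the two lists pairs each label with its value
label_elements = await page.query_selector_all(".mfs7Fc")
value_elements = await page.query_selector_all(".P6K39c")
for label_element, value_element in zip(label_elements, value_elements):
    print(await label_element.inner_text(), "->", await value_element.inner_text())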
Now that you've identified the elements you need, it's time to write the Playwright script to extract the data from Google Finance.
Let's define a new function called scrape_data to handle the scraping process. This function takes a ticker symbol, navigates to the Google Finance page, and returns a dictionary containing the extracted financial data.
Here's how it works:
import asyncio
from playwright.async_api import async_playwright, Playwright

async def scrape_data(playwright: Playwright, ticker: str) -> dict:
    financial_data = {
        "ticker": ticker.split(":")[0],
        "price": None,
        "price_change_value": None,
        "price_change_percentage": None,
        "close_time": None,
        "previous_close": None,
        "day_range": None,
        "year_range": None,
        "market_cap": None,
        "avg_volume": None,
        "p/e_ratio": None,
        "dividend_yield": None,
        "primary_exchange": None,
    }
    try:
        browser = await playwright.chromium.launch(headless=True)
        context = await browser.new_context()
        page = await context.new_page()
        await page.goto(f"https://www.google.com/finance/quote/{ticker}")

        # Scrape current price
        price_element = await page.query_selector("div.YMlKec.fxKbKc")
        if price_element:
            price_text = await price_element.inner_text()
            financial_data["price"] = price_text.replace(",", "")

        # Scrape price change percentage
        percentage_element = await page.query_selector("div.enJeMd div.JwB6zf")
        if percentage_element:
            percentage_text = await percentage_element.inner_text()
            financial_data["price_change_percentage"] = percentage_text.strip()

        # Scrape price change value
        value_element = await page.query_selector("span.P2Luy.ZYVHBb")
        if value_element:
            value_text = await value_element.inner_text()
            value_parts = value_text.split()
            if value_parts:
                financial_data["price_change_value"] = value_parts[0].replace("$", "")

        # Scrape close time
        close_time_element = await page.query_selector('//div[contains(text(), "Closed:")]')
        if close_time_element:
            close_time_text = await close_time_element.inner_text()
            close_time = close_time_text.split("·")[0].replace("Closed:", "").strip()
            # Replace the narrow no-break space (U+202F) with a regular space
            clean_close_time = close_time.replace("\u202f", " ")
            financial_data["close_time"] = clean_close_time

        # Scrape additional financial data from the table
        label_elements = await page.query_selector_all(".mfs7Fc")
        value_elements = await page.query_selector_all(".P6K39c")
        for label_element, value_element in zip(label_elements, value_elements):
            label = await label_element.inner_text()
            value = await value_element.inner_text()
            label = label.strip().lower().replace(" ", "_")
            if label in financial_data:
                financial_data[label] = value.strip()
    except Exception as e:
        print(f"An error occurred for {ticker}: {str(e)}")
    finally:
        await context.close()
        await browser.close()
    return financial_data
The code first navigates to the stock's page and extracts various metrics, such as the price and market cap, using query_selector and query_selector_all, common Playwright methods for selecting elements and fetching data based on CSS selectors and XPath queries.
It then extracts the text from each element with inner_text() and stores it in a dictionary, where each key represents a financial metric (e.g., price, market cap) and each value is the corresponding extracted text. Finally, the browser session is closed to free up resources.
Now, define the main function, which orchestrates the entire process of collecting the data.
async def main():
    # Define the ticker symbol
    ticker = "AAPL"
    # Append ":NASDAQ" to the ticker for the Google Finance URL
    ticker = f"{ticker}:NASDAQ"
    async with async_playwright() as playwright:
        # Collect data for the ticker
        data = await scrape_data(playwright, ticker)
        print(data)

# Run the main function
if __name__ == "__main__":
    asyncio.run(main())
At the end of the scraping process, the following data is printed to the console:

So far, we've scraped data for a single stock. To collect data for multiple stocks from Google Finance at once, we can modify the script to accept ticker symbols as command-line arguments and process each one. Make sure to import the sys module.
import sys

async def main():
    # Get ticker symbols from command line arguments
    if len(sys.argv) < 2:
        print("Please provide at least one ticker symbol as a command-line argument.")
        sys.exit(1)
    tickers = sys.argv[1:]
    async with async_playwright() as playwright:
        results = []
        for ticker in tickers:
            data = await scrape_data(playwright, f"{ticker}:NASDAQ")
            results.append(data)
        print(results)

# Run the main function
if __name__ == "__main__":
    asyncio.run(main())
To run the script, pass the ticker symbols as arguments:
python google_finance_scraper/main.py aapl meta amzn
This will scrape and display the data for Apple, Meta, and Amazon.

Websites often detect and block automated scraping using techniques like rate limiting, IP blocking, and analysis of browsing patterns. When scraping data from websites, it's crucial to employ strategies that avoid detection. Here are some effective ways to stay under the radar:
1. Random intervals between requests
A simple way to reduce the risk of detection is to introduce random delays between requests. This simple technique can significantly lower the chances of being identified as an automated scraper.
Here's how to add random delays to your Playwright script:
import asyncio
import random
from playwright.async_api import Playwright, async_playwright

async def scrape_data(playwright: Playwright, ticker: str):
    browser = await playwright.chromium.launch()
    context = await browser.new_context()
    page = await context.new_page()
    url = f"https://www.google.com/finance/quote/{ticker}"
    await page.goto(url)
    # Random delay to mimic human-like behavior
    await asyncio.sleep(random.uniform(2, 5))
    # Your scraping logic here...
    await context.close()
    await browser.close()

async def main():
    async with async_playwright() as playwright:
        await scrape_data(playwright, "AAPL:NASDAQ")

if __name__ == "__main__":
    asyncio.run(main())
This script introduces a random delay of 2 to 5 seconds between requests, making the activity less predictable and reducing the likelihood of being flagged as a bot.
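If you're using the multi-ticker loop from earlier, the same idea can be applied between tickers. A small sketch:

# Sketch: pause for a random interval between tickers in the main loop
for ticker in tickers:
    data = await scrape_data(playwright, f"{ticker}:NASDAQ")
    results.append(data)
    await asyncio.sleep(random.uniform(2, 5))  # random pause before the next request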
2. Setting and rotating User-Agents
Websites often use User-Agent strings to identify the browser and device behind each request. By rotating User-Agent strings, you can make your scraping requests appear to come from different browsers and devices, helping you avoid detection.
Here's how to implement User-Agent rotation in Playwright:
import asyncio
import random
from playwright.async_api import Playwright, async_playwright

user_agents = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101 Firefox/91.0",
]

async def scrape_data(playwright: Playwright, ticker: str) -> None:
    browser = await playwright.chromium.launch(headless=True)
    # Pick a random User-Agent for this browser context
    context = await browser.new_context(user_agent=random.choice(user_agents))
    page = await context.new_page()
    url = f"https://www.google.com/finance/quote/{ticker}"
    await page.goto(url)
    # Your scraping logic goes here...
    await context.close()
    await browser.close()

async def main():
    async with async_playwright() as playwright:
        await scrape_data(playwright, "AAPL:NASDAQ")

if __name__ == "__main__":
    asyncio.run(main())
This approach uses a list of User-Agent strings and randomly selects one for each request. The technique helps mask your scraper's identity and reduces the likelihood of being blocked.
*Note*: You can refer to websites like useragentstring.com for a comprehensive list of User-Agent strings.
3. Using playwright-stealth
To further minimize detection and strengthen your scraping efforts, you can use the playwright-stealth library, which applies various techniques to make your scraping activity look like that of a real user.
First, install playwright-stealth:
poetry add playwright-stealth
If you encounter a ModuleNotFoundError for pkg_resources, it's most likely because the setuptools package isn't installed. To resolve this, install setuptools as well:
poetry add setuptools
Then, modify your script:
import asyncio
from playwright.async_api import Playwright, async_playwright
from playwright_stealth import stealth_async

async def scrape_data(playwright: Playwright, ticker: str) -> None:
    browser = await playwright.chromium.launch(headless=True)
    context = await browser.new_context()
    page = await context.new_page()
    # Apply stealth techniques to the page to avoid detection
    await stealth_async(page)
    url = f"https://www.google.com/finance/quote/{ticker}"
    await page.goto(url)
    # Your scraping logic here...
    await context.close()
    await browser.close()

async def main():
    async with async_playwright() as playwright:
        await scrape_data(playwright, "AAPL:NASDAQ")

if __name__ == "__main__":
    asyncio.run(main())
These techniques can help you avoid getting blocked, but you may still run into problems. If so, try more advanced methods, such as using proxies, rotating IP addresses, or implementing CAPTCHA solvers.
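For instance, Playwright can route all traffic through a proxy specified at launch time. Here's a minimal sketch; the proxy server address and credentials are placeholders you'd replace with details from your own proxy provider:

import asyncio
from playwright.async_api import async_playwright

async def main():
    async with async_playwright() as playwright:
        # The server, username, and password below are placeholders,
        # substitute the details from your own proxy provider
        browser = await playwright.chromium.launch(
            headless=True,
            proxy={
                "server": "http://proxy.example.com:8000",
                "username": "your_username",
                "password": "your_password",
            },
        )
        page = await browser.new_page()
        await page.goto("https://www.google.com/finance/quote/AAPL:NASDAQ")
        # Your scraping logic here...
        await browser.close()

if __name__ == "__main__":
    asyncio.run(main())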
Once you've scraped the desired stock data, the next step is to export it to a CSV file for easy analysis, sharing, or importing into other data-processing tools.
Here's how to save the extracted data to a CSV file:
# ...
import csv

async def main() -> None:
    # ...
    async with async_playwright() as playwright:
        # Collect data for all tickers
        results = []
        for ticker in tickers:
            data = await scrape_data(playwright, ticker)
            results.append(data)

        # Define the CSV file name
        csv_file = "financial_data.csv"

        # Write data to CSV
        with open(csv_file, mode="w", newline="") as file:
            writer = csv.DictWriter(file, fieldnames=results[0].keys())
            writer.writeheader()
            writer.writerows(results)

if __name__ == "__main__":
    asyncio.run(main())
The code first collects the data for each ticker symbol. It then creates a CSV file named financial_data.csv and uses Python's csv.DictWriter to write the data, first writing the column headers with the writeheader() method and then adding each row of data with writerows().
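As a quick sanity check, you can read the exported file back with csv.DictReader. A minimal sketch:

import csv

# Read the exported file back and print one line per ticker
with open("financial_data.csv", newline="") as file:
    for row in csv.DictReader(file):
        print(row["ticker"], row["price"])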
Let's bring everything together into a single script. This final snippet includes all the steps, from scraping the data from Google Finance to exporting it to a CSV file.
import asyncio
import sys
import csv
from playwright.async_api import async_playwright, Playwright

async def scrape_data(playwright: Playwright, ticker: str) -> dict:
    """
    Scrape financial data for a given stock ticker from Google Finance.

    Args:
        playwright (Playwright): The Playwright instance.
        ticker (str): The stock ticker symbol.

    Returns:
        dict: A dictionary containing the scraped financial data.
    """
    financial_data = {
        "ticker": ticker.split(":")[0],
        "price": None,
        "price_change_value": None,
        "price_change_percentage": None,
        "close_time": None,
        "previous_close": None,
        "day_range": None,
        "year_range": None,
        "market_cap": None,
        "avg_volume": None,
        "p/e_ratio": None,
        "dividend_yield": None,
        "primary_exchange": None,
    }
    try:
        # Launch the browser and navigate to the Google Finance page for the ticker
        browser = await playwright.chromium.launch(headless=True)
        context = await browser.new_context()
        page = await context.new_page()
        await page.goto(f"https://www.google.com/finance/quote/{ticker}")

        # Scrape current price
        price_element = await page.query_selector("div.YMlKec.fxKbKc")
        if price_element:
            price_text = await price_element.inner_text()
            financial_data["price"] = price_text.replace(",", "")

        # Scrape price change percentage
        percentage_element = await page.query_selector("div.enJeMd div.JwB6zf")
        if percentage_element:
            percentage_text = await percentage_element.inner_text()
            financial_data["price_change_percentage"] = percentage_text.strip()

        # Scrape price change value
        value_element = await page.query_selector("span.P2Luy.ZYVHBb")
        if value_element:
            value_text = await value_element.inner_text()
            value_parts = value_text.split()
            if value_parts:
                financial_data["price_change_value"] = value_parts[0].replace("$", "")

        # Scrape close time
        close_time_element = await page.query_selector('//div[contains(text(), "Closed:")]')
        if close_time_element:
            close_time_text = await close_time_element.inner_text()
            close_time = close_time_text.split("·")[0].replace("Closed:", "").strip()
            # Replace the narrow no-break space (U+202F) with a regular space
            clean_close_time = close_time.replace("\u202f", " ")
            financial_data["close_time"] = clean_close_time

        # Scrape additional financial data
        label_elements = await page.query_selector_all(".mfs7Fc")
        value_elements = await page.query_selector_all(".P6K39c")
        for label_element, value_element in zip(label_elements, value_elements):
            label = await label_element.inner_text()
            value = await value_element.inner_text()
            label = label.strip().lower().replace(" ", "_")
            if label in financial_data:
                financial_data[label] = value.strip()
    except Exception as e:
        print(f"An error occurred for {ticker}: {str(e)}")
    finally:
        # Ensure the browser is closed even if an exception occurs
        await context.close()
        await browser.close()
    return financial_data

async def main():
    """
    Main function to scrape financial data for multiple stock tickers and save to CSV.
    """
    # Get ticker symbols from command line arguments
    if len(sys.argv) < 2:
        print("Please provide at least one ticker symbol as a command-line argument.")
        sys.exit(1)
    tickers = sys.argv[1:]
    async with async_playwright() as playwright:
        results = []
        for ticker in tickers:
            data = await scrape_data(playwright, f"{ticker}:NASDAQ")
            results.append(data)

        # Define CSV file name
        csv_file = "financial_data.csv"

        # Write data to CSV
        with open(csv_file, mode="w", newline="") as file:
            writer = csv.DictWriter(file, fieldnames=results[0].keys())
            writer.writeheader()
            writer.writerows(results)
        print(f"Data exported to {csv_file}")

# Run the main function
if __name__ == "__main__":
    asyncio.run(main())
You can run the script from your terminal by providing one or more ticker symbols as command-line arguments.
python google_finance_scraper/main.py AAPL META AMZN TSLA
After you run the script, a CSV file named financial_data.csv will be created in the same directory, containing all the data in an organized way. The CSV file will look like this:

With your scraper ready, it's time to deploy it to the cloud with Apify. This lets you run the scraper on a schedule and take advantage of Apify's platform features. For this task, we'll use the Python Playwright template for a quick setup. On Apify, scrapers are called Actors.
Start by cloning the Playwright + Chrome template from the Apify Python template library.
First, you'll need to install the Apify CLI, which helps you manage your Actors. On macOS or Linux, you can do this with Homebrew:
brew install apify-cli
Or via NPM:
npm -g install apify-cli
With the CLI installed, create a new Actor using the Python Playwright + Chrome template:
apify create gf-scraper -t python-playwright
This command sets up a project in the gf-scraper directory, installs all the necessary dependencies, and provides some boilerplate code to get you started.
Navigate to the new project folder and open it with your favorite code editor. In this example, I'm using VS Code:
cd gf-scraper
code .
The template comes with a fully functional scraper. You can test it by running the apify run command to see it in action. The results are saved in storage/datasets.
Next, modify the code in src/main.py to suit scraping Google Finance.
Here's the modified code:
from playwright.async_api import async_playwright
from apify import Actor

async def extract_stock_data(page, ticker):
    financial_data = {
        "ticker": ticker.split(":")[0],
        "price": None,
        "price_change_value": None,
        "price_change_percentage": None,
        "close_time": None,
        "previous_close": None,
        "day_range": None,
        "year_range": None,
        "market_cap": None,
        "avg_volume": None,
        "p/e_ratio": None,
        "dividend_yield": None,
        "primary_exchange": None,
    }

    # Scrape current price
    price_element = await page.query_selector("div.YMlKec.fxKbKc")
    if price_element:
        price_text = await price_element.inner_text()
        financial_data["price"] = price_text.replace(",", "")

    # Scrape price change percentage
    percentage_element = await page.query_selector("div.enJeMd div.JwB6zf")
    if percentage_element:
        percentage_text = await percentage_element.inner_text()
        financial_data["price_change_percentage"] = percentage_text.strip()

    # Scrape price change value
    value_element = await page.query_selector("span.P2Luy.ZYVHBb")
    if value_element:
        value_text = await value_element.inner_text()
        value_parts = value_text.split()
        if value_parts:
            financial_data["price_change_value"] = value_parts[0].replace("$", "")

    # Scrape close time
    close_time_element = await page.query_selector('//div[contains(text(), "Closed:")]')
    if close_time_element:
        close_time_text = await close_time_element.inner_text()
        close_time = close_time_text.split("·")[0].replace("Closed:", "").strip()
        # Replace the narrow no-break space (U+202F) with a regular space
        clean_close_time = close_time.replace("\u202f", " ")
        financial_data["close_time"] = clean_close_time

    # Scrape additional financial data
    label_elements = await page.query_selector_all(".mfs7Fc")
    value_elements = await page.query_selector_all(".P6K39c")
    for label_element, value_element in zip(label_elements, value_elements):
        label = await label_element.inner_text()
        value = await value_element.inner_text()
        label = label.strip().lower().replace(" ", "_")
        if label in financial_data:
            financial_data[label] = value.strip()

    return financial_data

async def main() -> None:
    """
    Main function to run the Apify Actor and extract stock data using Playwright.

    Reads input configuration from the Actor, enqueues URLs for scraping,
    launches Playwright to process requests, and extracts stock data.
    """
    async with Actor:
        # Retrieve input parameters
        actor_input = await Actor.get_input() or {}
        start_urls = actor_input.get("start_urls", [])
        tickers = actor_input.get("tickers", [])
        if not start_urls:
            Actor.log.info("No start URLs specified in actor input. Exiting...")
            await Actor.exit()
        base_url = start_urls[0].get("url", "")

        # Enqueue requests for each ticker
        default_queue = await Actor.open_request_queue()
        for ticker in tickers:
            url = f"{base_url}{ticker}:NASDAQ"
            await default_queue.add_request(url)

        # Launch Playwright and open a new browser context
        Actor.log.info("Launching Playwright...")
        async with async_playwright() as playwright:
            browser = await playwright.chromium.launch(headless=Actor.config.headless)
            context = await browser.new_context()

            # Process requests from the queue
            while request := await default_queue.fetch_next_request():
                url = request.url  # Use attribute access instead of dictionary-style access
                Actor.log.info(f"Scraping {url} ...")
                try:
                    # Open the URL in a new Playwright page
                    page = await context.new_page()
                    await page.goto(url, wait_until="domcontentloaded")
                    # Extract the ticker symbol from the URL
                    ticker = url.rsplit("/", 1)[-1]
                    data = await extract_stock_data(page, ticker)
                    # Push the extracted data to Apify
                    await Actor.push_data(data)
                except Exception as e:
                    Actor.log.exception(f"Error extracting data from {url}: {e}")
                finally:
                    # Ensure the page is closed and the request is marked as handled
                    await page.close()
                    await default_queue.mark_request_as_handled(request)
Before running the code, update the input_schema.json file in the .actor/ directory to include the Google Finance quote page URL and add a tickers field.
Here's the updated input_schema.json file:
{
    "title": "Python Playwright Scraper",
    "type": "object",
    "schemaVersion": 1,
    "properties": {
        "start_urls": {
            "title": "Start URLs",
            "type": "array",
            "description": "URLs to start with",
            "prefill": [
                {
                    "url": "https://www.google.com/finance/quote/"
                }
            ],
            "editor": "requestListSources"
        },
        "tickers": {
            "title": "Tickers",
            "type": "array",
            "description": "List of stock ticker symbols to scrape data for",
            "items": {
                "type": "string"
            },
            "prefill": [
                "AAPL",
                "GOOGL",
                "AMZN"
            ],
            "editor": "stringList"
        },
        "max_depth": {
            "title": "Maximum depth",
            "type": "integer",
            "description": "Depth to which to scrape to",
            "default": 1
        }
    },
    "required": [
        "start_urls",
        "tickers"
    ]
}
Also, update the input.json file by changing its URL to a Google Finance page to prevent conflicts during execution, or simply delete the file.
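For reference, a minimal input.json consistent with the schema above might look like this (the template's actual file contents may differ):

{
    "start_urls": [{ "url": "https://www.google.com/finance/quote/" }],
    "tickers": ["AAPL", "GOOGL", "AMZN"]
}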

To run your Actor, run the following command in your terminal:
apify run
The scraped results are saved in storage/datasets, where each ticker symbol gets its own JSON file, like this:

To deploy your Actor, first create an Apify account (if you don't already have one). Then, get your API token from the Apify Console under Settings → Integrations, and finally log in with your token using the following command:
apify login -t YOUR_APIFY_TOKEN
Finally, push your Actor to Apify:
apify push
After a few moments, your Actor should appear in the Apify Console under Actors → My actors.

Your scraper is now ready to run on the Apify platform. Click the "Start" button to begin. Once the run is complete, you can preview and download the data in various formats from the "Storage" tab.

Bonus: a major advantage of running scrapers on Apify is the ability to save different configurations for the same Actor and set up automatic scheduling. Let's set this up for our Playwright Actor.
On the Actor page, click "Create empty task".

Next, click "Actions" and then "Schedule".

Finally, choose how often you want the Actor to run and click "Create".

Perfect! Your Actor is now set to run automatically at the times you specified. You can view and manage all scheduled runs in the "Schedules" tab of the Apify platform.
To start scraping with Python on the Apify platform, you can use Python code templates. Templates are available for popular libraries such as Requests, Beautiful Soup, Scrapy, Playwright, and Selenium, and they let you quickly build scrapers for a variety of web scraping tasks.
No, Google Finance does not have a public API. It used to have one, but it was deprecated in 2012, and Google has not released a new public API for accessing financial data through Google Finance since.
You've learned how to use Playwright to interact with Google Finance and extract valuable financial data. You've also explored ways to avoid getting blocked and built a solution where you pass in one or more ticker symbols and all the desired data is stored in a CSV file. In addition, you now have a solid understanding of how to use the Apify platform and its Actor framework to build scalable web scrapers and schedule them to run at the most convenient times.
Source: https://blog.apify.com/scrape-google-finance-python/#does-google-finance-allow-scraping