The code is as follows:
# Note: this was run on Python 3.7
# Import the required modules
import requests                     # requests, for the HTTP request
from bs4 import BeautifulSoup       # BeautifulSoup, for parsing the HTML
import pandas as pd                 # pandas, for saving the results

# Request the page
url = "http://q.stock.sohu.com/"    # target URL: the Sohu stock quotes page
response = requests.get(url)        # send a GET request to Sohu and keep the response
response.encoding = 'utf-8'         # decode the response as UTF-8
html = response.text                # the HTML source of the page

# Parse the page
soup = BeautifulSoup(html, 'lxml')  # parse the HTML with the lxml parser
content = soup.find_all('a')        # find every <a> tag
links = []                          # the extracted href values
for aa in content:                  # iterate over the <a> tags
    print(aa.get('href'))           # print the href attribute of each tag
    links.append(aa.get('href'))    # and keep it for saving

# Save the data
df = pd.DataFrame(links, columns=["网址"])  # one column named "网址" (URL), one row per link
df.to_excel("搜索a标签内容.xlsx")            # write to Excel (to_excel, lowercase; needs openpyxl)
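The request above is sent without any error handling. A minimal robustness sketch follows; the User-Agent string and timeout value are illustrative assumptions, not requirements of the site:

# A minimal robustness sketch; the User-Agent string and timeout value are
# illustrative assumptions.
import requests

url = "http://q.stock.sohu.com/"
headers = {"User-Agent": "Mozilla/5.0"}        # hypothetical browser-like UA
response = requests.get(url, headers=headers, timeout=10)
response.raise_for_status()                    # raise on 4xx/5xx instead of parsing an error page
response.encoding = 'utf-8'
html = response.text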
The output is as follows:
/
//s.m.sohu.com/t/index.html
//q.stock.sohu.com/feedback.html
//q.stock.sohu.com/cn/mystock.shtml
//q.stock.sohu.com/cn/bk.shtml
//q.stock.sohu.com/cn/ph.shtml
//q.stock.sohu.com/cn/zs.shtml
//q.stock.sohu.com/fundflow/
/sdk/rank
//stock.sohu.com/ipo/
//q.stock.sohu.com/App2/bigdeal2.jsp
//q.stock.sohu.com/app2/rpsholder.up
//q.stock.sohu.com/app2/mpssTrade.up
//stock.sohu.com/s2011/jlp/
//q.fund.sohu.com/jzph/zxjz_date_up.shtml
//q.stock.sohu.com/us/zgg.html
JAVAscript:void(0);
/sdk/transfer?page=callin
/sdk/transfer?page=callin
/sdk/transfer?page=callout
/sdk/transfer?page=cancel
/sdk/transfer?page=record
//mp.sohu.com
JavaScript:void(0);
javascript:void(0);
javascript:void(0);
//q.stock.sohu.com/cn/ph_m.shtml?type=sh_as&field=changerate&sort=up
//q.stock.sohu.com/cn/ph_m.shtml?type=sz_as&field=changerate&sort=up
//q.stock.sohu.com/cn/bk.shtml
//q.stock.sohu.com/cn/bk.shtml
//q.stock.sohu.com/cn/bk.shtml
//q.stock.sohu.com/cn/bk.shtml
javascript:void(0);
javascript:void(0);
/sdk/rank
//q.stock.sohu.com/cn/mystock.shtml
javascript:void(0);
//q.stock.sohu.com/fundflow/stock_inflow.html?name=NetVal&io=In
//q.stock.sohu.com/fundflow/stock_inflow.html?name=NetVal&io=Out
//q.stock.sohu.com/app2/mpssTrade.up
//q.stock.sohu.com/app2/mpssTrade.up
//q.stock.sohu.com/app2/bigdeal2.jsp
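The printed hrefs mix protocol-relative URLs (//...), site-relative paths (/sdk/...), and javascript:void(0); pseudo-links. Below is a small sketch of how they could be normalized before saving, assuming the links list collected in the code above; the clean_links name is just for illustration:

from urllib.parse import urljoin

base = "http://q.stock.sohu.com/"
clean_links = []
for href in links:                              # `links` from the scraping code above
    if not href or href.lower().startswith("javascript:"):
        continue                                # drop empty hrefs and javascript:void(0); entries
    clean_links.append(urljoin(base, href))     # //host/path and /path become absolute URLs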
An example image is shown below: