spidr
1.0.0
Spidr is a versatile Ruby web spidering library that can spider a site, multiple domains, certain links, or spider indefinitely. Spidr is designed to be fast and easy to use.
Features:

- Follows a tags.
- Follows iframe tags.
- Follows frame tags.
- /robots.txt support.
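Every example below assumes the gem has already been loaded with require 'spidr' (installation is covered at the end of this document). As a minimal complete script, here is the "print out visited URLs" example from below wrapped in a runnable file; the target URL is a placeholder:

#!/usr/bin/env ruby
require 'spidr'

# Crawl a single site and print every URL as it is visited.
Spidr.site('http://example.com/') do |spider|
  spider.every_url { |url| puts url }
end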
Start spidering from a URL:

Spidr.start_at('http://tenderlovemaking.com/') do |agent|
  # ...
end

Spider a host:

Spidr.host('solnic.eu') do |agent|
  # ...
end

Spider a domain (and any sub-domains):

Spidr.domain('ruby-lang.org') do |agent|
  # ...
end

Spider a site:

Spidr.site('http://www.rubyflow.com/') do |agent|
  # ...
end

Spider multiple hosts:

Spidr.start_at('http://company.com/', hosts: ['company.com', /host[\d]+\.company\.com/]) do |agent|
  # ...
end

Do not spider certain links:

Spidr.site('http://company.com/', ignore_links: [%{^/blog/}]) do |agent|
  # ...
end

Do not spider links on certain ports:

Spidr.site('http://company.com/', ignore_ports: [8000, 8010, 8080]) do |agent|
  # ...
end

Do not spider links blacklisted in robots.txt:

Spidr.site('http://company.com/', robots: true) do |agent|
  # ...
end

Print out visited URLs:

Spidr.site('http://www.rubyinside.com/') do |spider|
  spider.every_url { |url| puts url }
end

Build a URL map of a site:

url_map = Hash.new { |hash, key| hash[key] = [] }

Spidr.site('http://intranet.com/') do |spider|
  spider.every_link do |origin, dest|
    url_map[dest] << origin
  end
end
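Once the crawl finishes, url_map holds, for each destination URL, every page that linked to it; dumping it afterwards is plain Ruby:

url_map.each do |dest, origins|
  puts dest
  origins.each { |origin| puts "  <- #{origin}" }
end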
Print out the URLs that could not be requested:

Spidr.site('http://company.com/') do |spider|
  spider.every_failed_url { |url| puts url }
end

Find all pages which have broken links:

url_map = Hash.new { |hash, key| hash[key] = [] }

spider = Spidr.site('http://intranet.com/') do |spider|
  spider.every_link do |origin, dest|
    url_map[dest] << origin
  end
end

spider.failures.each do |url|
  puts "Broken link #{url} found in:"

  url_map[url].each { |page| puts "  #{page}" }
end

Search HTML and XML pages:

Spidr.site('http://company.com/') do |spider|
  spider.every_page do |page|
    puts ">>> #{page.url}"

    page.search('//meta').each do |meta|
      name  = (meta.attributes['name'] || meta.attributes['http-equiv'])
      value = meta.attributes['content']

      puts "  #{name} = #{value}"
    end
  end
end
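Since page.search takes an XPath expression (pages are parsed with Nokogiri under the hood), other node sets can be extracted the same way. A sketch that lists each page's raw anchor href attributes, assuming the //a/@href query:

Spidr.site('http://company.com/') do |spider|
  spider.every_html_page do |page|
    # Each result is a Nokogiri attribute node; print its string value.
    page.search('//a/@href').each do |href|
      puts href.value
    end
  end
end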
Print out the titles from every page:

Spidr.site('https://www.ruby-lang.org/') do |spider|
  spider.every_html_page do |page|
    puts page.title
  end
end

Print out every HTTP redirect:

Spidr.host('company.com') do |spider|
  spider.every_redirect_page do |page|
    puts "#{page.url} -> #{page.headers['Location']}"
  end
end

Find out what kinds of web servers a host is using, by accessing the headers:

require 'set'

servers = Set[]

Spidr.host('company.com') do |spider|
  spider.all_headers do |headers|
    servers << headers['server']
  end
end
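After the crawl completes, servers contains one entry per distinct Server header value seen:

servers.each { |server| puts server }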
Pause the spider on a forbidden page:

Spidr.host('company.com') do |spider|
  spider.every_forbidden_page do |page|
    spider.pause!
  end
end
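Spidr.host returns the agent object, so a paused crawl can be picked up again later; a sketch, assuming the agent's continue! action (the counterpart to pause!):

spider = Spidr.host('company.com') do |spider|
  spider.every_forbidden_page { |page| spider.pause! }
end

# Resume crawling from where the agent paused.
spider.continue!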
Skip the processing of a page:

Spidr.host('company.com') do |spider|
  spider.every_missing_page do |page|
    spider.skip_page!
  end
end

Skip the processing of links:

Spidr.host('company.com') do |spider|
  spider.every_url do |url|
    # Skip links whose path contains a directory component over 1000.
    if url.path.split('/').find { |dir| dir.to_i > 1000 }
      spider.skip_link!
    end
  end
end

Install:

$ gem install spidr

See {file:LICENSE.txt} for license information.