Bug 762209 (GLUSTER-477)

Summary: Allow search engines to crawl Bugzilla
Product: [Community] GlusterFS Reporter: Hraban Luyat <bubblboy>
Component: unclassified Assignee: Vijay Bellur <vbellur>
Status: CLOSED CURRENTRELEASE QA Contact:
Severity: medium Docs Contact:
Priority: medium    
Version: mainline CC: aavati, gluster-bugs, lakshmipathi
Target Milestone: ---   
Target Release: ---   
Hardware: All   
OS: All   
URL: http://bugs.gluster.com/robots.txt
Whiteboard:
Fixed In Version: Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: Type: ---
Regression: RTNR Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:

Description Hraban Luyat 2009-12-16 05:19:40 UTC
I just tried to search for a bug report I created a while ago, to see whether other people with the same issue could find it through a public search engine, but it does not turn up anywhere. Apparently, the robots.txt file blocks all spiders, per the default Bugzilla setting:

User-agent: *
Allow: /index.cgi
Disallow: /

In my opinion, allowing public search engines to crawl the bug reports could be of great benefit to people looking for help with a particular bug (and to people who do not like the Bugzilla search engine... ;)). I would propose allowing access to show_bug.cgi and to a page that lists all (or at least all active) bugs.
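The effect of the proposal can be sketched with Python's standard-library robots.txt parser: under the default rules quoted above, show_bug.cgi is blocked, while adding an Allow line for it (the reporter's proposal, not necessarily the rule actually deployed) makes individual bug pages crawlable:

```python
from urllib import robotparser

# Default Bugzilla robots.txt, as quoted in the report.
default_rules = """\
User-agent: *
Allow: /index.cgi
Disallow: /
""".splitlines()

# Hypothetical rules implementing the proposal: also allow show_bug.cgi.
proposed_rules = """\
User-agent: *
Allow: /index.cgi
Allow: /show_bug.cgi
Disallow: /
""".splitlines()

def crawlable(rules, path):
    """Return True if a generic crawler may fetch `path` under `rules`."""
    rp = robotparser.RobotFileParser()
    rp.parse(rules)
    return rp.can_fetch("*", path)

print(crawlable(default_rules, "/show_bug.cgi?id=762209"))   # blocked
print(crawlable(proposed_rules, "/show_bug.cgi?id=762209"))  # allowed
```

Note that Allow is an extension to the original robots.txt convention; major crawlers and Python's parser honor it, but support is not universal.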

Comment 1 Anand Avati 2010-01-23 18:06:04 UTC
http://bugs.gluster.com/robots.txt reflects the requested change