Bug 1019610 - RFE - Add a 'pre-QA' report to the doc builder
Status: NEW
Product: PressGang CCMS
Classification: Community
Component: Web-UI
Version: 1.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Assigned To: pressgang-ccms-dev
Depends On:
Blocks: 1013887
Reported: 2013-10-16 03:11 EDT by Tim Hildred
Modified: 2014-08-04 18:29 EDT
CC: 2 users

Doc Type: Bug Fix
Type: Bug


Attachments: None
Description Tim Hildred 2013-10-16 03:11:58 EDT
Description of problem:

Writers (i.e. me) often send books with spelling mistakes to ON_QA. I know there is a spell checker in the topic editor. What would be awesome, though, is a test/report that could be run over the whole book prior to rhpkg publican build, and that would catch any outstanding issues.

If it started with a report that included a book-wide spell check, that would be excellent, and would reduce the number of times bugs failed_qa.

The workflow I have in mind is something like:

- finish editing topics, about to build book to d-d.e.r.c to get some bugs verified.
- go to docbuilder, hit "Rebuild book"
- find the "Pre-QA Report" button, hit it.
- wait, while a test harness is run on every topic in the content map (test harness to start with is spell-checker)
- spits out a link to a report which is valid for a topic map revision and the topic-revisions that were present at that time
- report contains a list of topics with spelling errors, the errors, and links to open them.
- report also contains some kind of unique identifier string that can be understood to mean pass or fail, and to recall the spec and topic versions it is valid for.
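
The report step in the workflow above could be sketched roughly as follows. This is a hypothetical illustration, not PressGang's actual API: the topic structure, the tiny KNOWN_WORDS dictionary, and the report-identifier format are all invented stand-ins for the real dictionary and for content the doc builder would fetch from the CCMS.

```python
import re

# Hypothetical stand-in for a real spelling dictionary.
KNOWN_WORDS = {"the", "volume", "is", "mounted", "on", "host", "a"}

def spell_check_topic(topic_id, revision, text):
    """Return (topic_id, revision, [misspelled words])."""
    words = re.findall(r"[a-zA-Z']+", text.lower())
    errors = sorted({w for w in words if w not in KNOWN_WORDS})
    return (topic_id, revision, errors)

def build_report(spec_revision, topics):
    """topics: iterable of (topic_id, revision, text) tuples."""
    results = [spell_check_topic(t, r, text) for t, r, text in topics]
    failures = [r for r in results if r[2]]
    # A unique identifier encoding the spec revision and the outcome,
    # so the report can be matched to the exact versions it checked.
    status = "PASS" if not failures else "FAIL"
    return {"id": f"preqa-{spec_revision}-{status}",
            "spec_revision": spec_revision,
            "results": failures}

report = build_report(1042, [
    (301, 7, "The volume is mounted on the host"),
    (302, 3, "The volum is mounted on teh host"),
])
print(report["id"])       # preqa-1042-FAIL
print(report["results"])  # [(302, 3, ['teh', 'volum'])]
```

Each failing entry carries the topic ID and revision, which is what would let the report link back to the exact topic versions it is valid for.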

Then, the "pre-QA report" test harness could be expanded to address other issues that QE comes up against regularly. A grammar checker comes to mind next, to catch the double words (the the, etc). Also, a tag consistency checker.
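
The doubled-word check mentioned above is simple enough to sketch with a regular expression. This is a naive illustration of one check in such a harness, not the tool's actual implementation:

```python
import re

def find_doubled_words(text):
    """Return (word, position) pairs where a word repeats back to
    back, e.g. 'the the'. Case-insensitive; a naive sketch of one
    check in a lint-style pre-QA harness."""
    pattern = re.compile(r"\b([a-zA-Z]+)\s+\1\b", re.IGNORECASE)
    return [(m.group(1).lower(), m.start())
            for m in pattern.finditer(text)]

hits = find_doubled_words(
    "Mount the the volume, then start start the service.")
# hits == [('the', 6), ('start', 27)]
```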

Then, the report could be expanded further to provide visibility on other information in our books. 

For example, I know that Pressgang knows what passive voice is, and can highlight it in a topic. You could add an optional passive voice check that would add another section to the report called "Passive Voice", that has a list of all examples of passive voice in a given book, with links to the topics that contain them.
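
How PressGang actually detects passive voice is not documented here, but a crude version of such a check can be sketched by flagging a form of "be" followed by an -ed word. This is an assumed heuristic, and a real check would need a part-of-speech tagger and a list of irregular participles:

```python
import re

# Naive heuristic: a form of "be" followed by a word ending in -ed.
# Misses irregular participles ("was written") and flags some
# non-passive constructions; a real check needs POS tagging.
PASSIVE = re.compile(r"\b(is|are|was|were|be|been|being)\s+(\w+ed)\b",
                     re.IGNORECASE)

def find_passive(text):
    return [m.group(0) for m in PASSIVE.finditer(text)]

examples = find_passive(
    "The volume is mounted by the host. We mounted it quickly.")
# examples == ['is mounted']  (the active second sentence is not flagged)
```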

Here is a good list of stuff it might be really good to know about a book, or a suite of books, that could be added as part of the test harness that gets applied at the book level, rather than the topic level.
http://www.afterthedeadline.com/features.slp

This is not to say that all of these issues cause a "failed_qa", but imagine this: you take a list of content maps (i.e. a docs suite), and dump them into the report. You run the test harness on all the content maps, and after a while you can see, for example, that:
- we used the word "Thus" in 14 topics in 4 books, 
- we used "however" 17 times in 2 books,
- we tagged "Volumes" with <guilabel> 6 times, and with <guibutton> 16 times,
- there are 5 instances of passive voice in 1 book, and 40 in another,
- and so on. 
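
The suite-wide aggregation described above amounts to running a counter over every book and totalling the hits. A minimal sketch, assuming books are plain text keyed by title (the watch-word list and the sample content are invented):

```python
import re
from collections import Counter

# Hypothetical list of words the report tracks across the suite.
WATCH_WORDS = {"thus", "however"}

def count_watch_words(books):
    """books: {book_title: text}. Returns {word: (total, n_books)}."""
    totals, book_hits = Counter(), Counter()
    for title, text in books.items():
        words = Counter(re.findall(r"[a-zA-Z]+", text.lower()))
        for w in WATCH_WORDS:
            if words[w]:
                totals[w] += words[w]
                book_hits[w] += 1
    return {w: (totals[w], book_hits[w])
            for w in WATCH_WORDS if totals[w]}

stats = count_watch_words({
    "Admin Guide": "Thus we proceed. However, thus it fails.",
    "User Guide": "However you mount it, the volume appears.",
})
# stats == {'thus': (2, 1), 'however': (2, 2)}
```

The (total, n_books) pairs are exactly the "used 'however' 17 times in 2 books" style of line the report would print.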


But, start with a pre-QE book-wide spell checker.
Comment 1 Tim Hildred 2013-10-16 03:15:09 EDT
Expansion of the workflow

When you hit "Pre-QE Report" button, you get a series of check boxes, one for each test.

At first, there is only one, "spellcheck". Then, as more tests get added, you'd get more checkboxes. That way, you would only run the tests you wanted reporting on.

Eventually, you might even have a "custom test" button, that would allow people to run scripts against content (in a chrooted environment).

Or something.
Comment 2 Matthew Casperson 2013-10-16 17:00:14 EDT
Are there scripts that are currently run by QE?

What I would like to work towards is a kind of "lint" type suite of standardised checks. Ideally these could be run in the browser as someone is editing a topic or content spec, and also as a standalone application that can be run by groups like QE.
Comment 3 Tim Hildred 2013-10-16 22:08:10 EDT
I don't think they have anything as robust as is required.

I'm looking at this as a 3 level edit.

1) Topic-level spelling and usage suggestions by computer
2) Book-level spelling and usage report by computer
3) Book-level process verification and sanity check by a person (docs QE)

The problem with leaving it at a topic-level check is that a misspelled word gets highlighted, but can still be saved and published.

Adding a book-level, completely automated report gives a second layer of checking once the high-pressure moment of "write this topic now" has passed. A writer can build their book in the doc builder, get the report, fix the book, rebuild it, see that the report shows the book contains zero typos, and then package it for QE.
Comment 4 Matthew Casperson 2013-11-24 23:34:57 EST
Spell checking is now performed across entire books.
Doubled words are now detected across entire books.
Style guide words and phrases are highlighted with different colored links:
  * Green means the word is valid, but may still have some additional information about its usage
  * Red means the word is invalid
  * Purple means the word may need to be changed as it is something that is potentially discouraged for use in technical documentation
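
The three-colour scheme above amounts to a lookup from flagged word to category to link colour. A minimal sketch of that mapping, with invented word lists (the actual style-guide entries are not shown in this bug):

```python
# Hypothetical style-guide entries; the real lists live in PressGang.
STYLE_GUIDE = {
    "hostname": "valid",        # green: valid, with usage notes
    "irregardless": "invalid",  # red: invalid
    "simply": "discouraged",    # purple: potentially discouraged
}

COLORS = {"valid": "green", "invalid": "red", "discouraged": "purple"}

def highlight_color(word):
    """Return the link colour for a flagged word, or None if the
    word is not in the style guide at all."""
    category = STYLE_GUIDE.get(word.lower())
    return COLORS.get(category)

highlight_color("simply")  # 'purple'
```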
