
Moved

Moved. See https://slott56.github.io. All new content goes to the new site. This is a legacy site, and it will likely be dropped five years after the last post, in Jan 2023.

Showing posts with label sphinx.

Tuesday, August 9, 2022

Tragedy Averted

I almost made a terrible blunder.

See https://github.com/slott56/py-web-tool for some background. This is a "Literate Programming" tool. I started fooling around with this kind of thing back in '05 (maybe even earlier). This is not the blunder. The whole idea of literate programming is not very popular. I'm a fan of Jupyter{Book} as the state of the art in sophisticated literate programming, if you're interested in it.

In my case, I started this project so long ago, I used docutils. This was long before Sphinx arrived on the scene. I never updated my little project to use Sphinx. The point was to have a kind of pure literate programming tool that could work with a variety of markup languages, including (but not limited to) RST.

Recently, I learned about PlantUML. The idea of a text description of a diagram is appealing. I don't really need to draw it; I just need to specify what's in it and let graphviz do the rest. This tool is very, very cool. You can capture ideas quickly. You can refine and expand on ideas until you reach a point where code makes more sense than a picture of code. 

For some things, you can gather data and draw a picture of things *as they are*. This is particularly valuable for cloud-based infrastructure, where a few queries lead to PlantUML source that renders very nicely.

Which leads to the idea of Literate Programming including UML diagrams. 

Doesn't sound too difficult. I can create an extension to docutils to introduce a UML directive. The resulting RST would look like this:

..  uml::

    left to right direction
    skinparam actorStyle awesome

    actor "Developer" as Dev
    rectangle PyWeb {
        usecase "Tangle Source" as UC_Tangle
        usecase "Weave Document" as UC_Weave
    }
    rectangle IDE {
        usecase "Create WEB" as UC_Create
        usecase "Run Tests" as UC_Test
    }
    Dev --> UC_Tangle
    Dev --> UC_Weave
    Dev --> UC_Create
    Dev --> UC_Test

    UC_Test --> UC_Tangle

It could be handy to have the diagrams as part of the documentation that tangles into the working code. One source for all of it.

I started down the path of researching docutils extensions. Got pretty far. Far enough that I had an empty repository and everything. I was about ready to start creating spike solutions.

Then.

[music cue] *duh duh duuuuuuh*

I found that Sphinx already has an extension for PlantUML. I almost started reading the code to see how it worked.

Then I realized how dumb that was. It already works. Why read the code? Why not install it?
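
For the record, enabling it is a couple of lines in conf.py. A minimal sketch, assuming the sphinxcontrib-plantuml package from PyPI and a local PlantUML jar (the path is illustrative):

extensions = ['sphinxcontrib.plantuml']

# Where to find PlantUML; adjust for your installation.
plantuml = 'java -jar /usr/local/lib/plantuml.jar'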

I had a choice to make.

  1. Continue building my own docutils plug-in.
  2. Switch to Sphinx.

Some complications:

  • My Literate Programming tool produces RST that *may* not be compatible with Sphinx.
  • It's yet another dependency in a tool that started out with zero dependencies. I've added pytest and tox. What next? 

What to do?

I have to say that Git is amazing. I can make a branch for the spike. If it works, pull request. If it doesn't work, delete the branch. This continues to be game-changing to me. I'm old. I remember when we had to back up the whole project directory tree before making this kind of change.

It worked. My tool's RST (with one exception) worked perfectly with Sphinx. The one exception was an obscure directive, .. class:: name, used to provide an HTML class name for the following block. It should always have been the docutils .. container:: name directive. With this fix, we're good to go.

I'm happy I avoided the trap of reimplementing something. Instead, I upgraded from "bare" docutils with my own CSS to Sphinx with its sophisticated templates and HTML themes.

Tuesday, January 3, 2017

The "Build Script" Idea

In compiled languages, the build script or makefile is pretty important. Java has maven (and gradle and ant) for this job.

Python doesn't really have much for this. Mostly because it's needless.

However.

Some folks like the idea of a build script. I've been asked for suggestions.

First and foremost: Go Slow. A build script is not essential. It's barely even helpful. Python isn't Java. There's no maven/gradle/ant nonsense because it isn't necessary. Make is a poor choice of tools for reasons we'll see below.

For folks new to Python, here's the step that's sometimes important.

python setup.py sdist bdist_wheel upload

This uses the distribution tools to build a source distribution (sdist) and a "wheel" (bdist_wheel) from the source code, then upload both. That's the only thing that's important, and even that's optional. The source is all that really exists, and a Git pull is the only thing that's truly required.

Really. There's no compilation, and there's no reason to do any processing prior to uploading source.

For folks experienced with Python, this may be obvious. For folks not so experienced, it's difficult to emphasize enough that Python is just source. No "class" files. No "jar" files. No "war" files. No "ear" files. None of that. A wheel is a Zip archive that follows some simple conventions.

Some Preliminary Steps

A modicum of care is a good idea before simply uploading something. There are a few steps that make some sense.

  1. Run pylint to check for obvious code problems. A low pylint score indicates that the code needs to be cleaned up. There's no magically ideal number, but with a few judicious "disable" comments, it's easy to get to 10.00.
  2. Run mypy to check the type hints. If mypy complains, you've got potentially serious problems.
  3. Run py.test and get a coverage report. There's no magically perfect test coverage number: more is better. Even 100% line-of-code coverage doesn't necessarily mean that all of the potential combinations of logic paths have been covered.
  4. Run sphinx to create documentation.
Only py.test has a simple pass-fail aspect. If the unit tests don't pass: that's a clear problem. 

The Script

Using make doesn't work out terribly well. It can be used, but it seems to me to be too confusing to set up properly.

Why? Because we don't have the kind of simple file relationships with which make works out so nicely. If we had simple *.c -> *.o -> *.ar kinds of relationships, make would be perfect. We don't have that, and this seems to make make more trouble than it's worth.  Both pylint and py.test keep history as well as produce reports. Sphinx is make-like already, which is why I'm leery of layering on the complexity.

My preference is something like this:

import glob
import pytest
from pylint import epylint as lint
import sphinx
from mypy import api

# Pylint: capture the report and print it for review.
(pylint_stdout, pylint_stderr) = lint.py_run(' '.join(glob.glob('*.py')), return_std=True)
print(pylint_stdout.getvalue())

# Mypy: api.run() takes a list of command-line arguments and returns a
# (report, errors, exit_status) tuple. Mypy doesn't expand globs itself.
result = api.run(sorted(glob.glob('*.py')))
print(result[0])

# Pytest: run the test suite.
pytest.main(["futurize_both/tests"])

# Sphinx: main() expects the program name first, then options, then the
# positional source and build directories.
sphinx.main(['sphinx-build', '-b', 'singlehtml', 'source', 'build/html'])

The point here is to simply run the four tools and then look at the output to see what needs to be fixed. Circumstances will dictate changes to the parameters being used. New features will need different reports than bug fixes. Some parts of a project will have different focus than other parts. Conversion from Python 2 to Python 3 will indicate a shift in focus, also.

The idea of a one-size-fits-all script seems inappropriate. These tools are sophisticated. Each has a distinctive feature set. Tweaking the parameters by editing the build script seems like a simple, flexible solution. I'm not comfortable defining parameter-parsing options for this, since each project I work on seems to be unique.

Important. Right now, mypy-lang in the PyPI repository and mypy in GitHub differ. The GitHub version includes an api module; the PyPI release does not include this. This script may not work for you, depending on which mypy release you're using. This will change in the future, making things nicer. Until then, you may want to run mypy "the hard way" using subprocess.check_call().

In enterprise software development environments, it can make sense to set some thresholds for pylint scores and pytest coverage. It's also very helpful to include type hints everywhere. In this context, it might make sense to parse the output from pylint, mypy, and py.test and stop processing if some quality threshold isn't met.
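
As a sketch of what that might look like -- the helper name, the threshold, and the "rated at N/10" line it scans for are illustrative, not from any particular project:

import re
import sys

def require_pylint_score(pylint_output, threshold=9.0):
    """Stop the build if the pylint score falls below the threshold."""
    match = re.search(r"rated at (-?[\d.]+)/10", pylint_output)
    if match and float(match.group(1)) < threshold:
        sys.exit("pylint score {0} below {1}".format(match.group(1), threshold))

require_pylint_score(pylint_stdout.getvalue())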

As noted above: Go Slow. This kind of tool automation isn't required and might actually be harmful if done badly. Arguing over pylint metrics isn't as helpful as writing unit test cases. I worry about teams developing an inappropriate focus on pylint or coverage reports -- and the associated numerology -- to the exclusion of sensible automated testing.

I think tools like https://pypi.python.org/pypi/pytest-bdd might be of more value than a simplistic "automated" tool chain. Automation doesn't seem as helpful as clarity in test design. I like the BDD idea with Gherkin test specifications because the Given-When-Then story outline seems to be very helpful for test design.

Tuesday, August 9, 2016

That Feeling When... You're reading your own documentation because it's useful and (mostly) correct

I'm looking at code (as a man does) and I can't remember if there's a class that does X. There's a lot of code. I wrote almost all of it. And -- maybe it's the gin -- but I just can't recall if there's an X. It seems like there should be.

Scan. Scan. Scroll. Scroll.

Read. Read.

Wait!

I have a pretty good gh-pages branch for this. Sphinx-based. Mostly up-to-date. Let's look there.

Ahhh. So much nicer than scrolling through code. Indexes work.

This whole "documentation" thing is pretty cool. Now I'm actually happy that other people guilted me into doing it.

Tuesday, June 23, 2015

Literate Programming and GitHub

I remain captivated by the ideals of Literate Programming. My fork of PyLit (https://github.com/slott56/PyLit-3) coupled with Sphinx seems to handle LP programming in a very elegant way.

It works like this.
  1. Write RST files describing the problem and the solution. This includes the actual implementation code. And everything else that's relevant. 
  2. Run PyLit3 to build final Python code from the RST documentation. This should include the setup.py so that it can be installed properly. 
  3. Run Sphinx to build pretty HTML pages (and LaTeX) from the RST documentation.
I often include the unit tests along with the sphinx build so that I'm sure that things are working.

The challenge is final presentation of the whole package.

The HTML can be easy to publish, but it can't (trivially) be used to recover the code. We have to upload two separate and distinct things. (We could use BeautifulSoup to recover RST from HTML and then PyLit to rebuild the code. But that sounds crazy.)

The RST is easy to publish, but hard to read and it requires a pass with PyLit to emit the code and then another pass with Sphinx to produce the HTML. A single upload doesn't work well.

If we publish only the Python code, we've defeated the point of literate programming. Even if we focus on the Python, we need to do a separate upload of HTML to provide the supporting documentation.

After working with this for a while, I've found that it's simplest to have one source and several targets. I use RST ⇒ (.py, .html, .tex). This encourages me to write documentation first. I often fail, and have blocks of code with tiny summaries and non-existent explanations.

PyLit allows one to use .py ⇒ .rst ⇒ .html, .tex. I've messed with this a bit and don't like it as much. Code first leaves the documentation as a kind of afterthought.

How can we publish simply and cleanly: without separate uploads?

Enter GitHub and gh-pages.

See the "sphinxdoc-test" project for an example. Also this https://github.com/daler/sphinxdoc-test. The bulk of this is useful advice on a way to create the gh-pages branch from your RST source via Sphinx and some GitHub commands.

Following this line of thinking, we almost have the case for three branches in a LP project.
  1. The "master" branch with the RST source. And nothing more.
  2. The "code" branch with the generated Python code created by PyLit.
  3. The "gh-pages" branch with the generated HTML created by Sphinx.
I think I like this.

We need three top-level directories. One has RST source. A build script would run PyLit to populate the (separate) directory for the code branch. The build script would also run Sphinx to populate a third top-level directory for the gh-pages branch.
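
A sketch of that build script. The directory and file names are illustrative; it assumes PyLit's convention of paired names like module.py.txt ↔ module.py.

import subprocess

# Tangle: rebuild the code branch's content from the RST source.
subprocess.check_call(['python', 'pylit.py', 'source/module.py.txt', 'code/module.py'])

# Weave: rebuild the gh-pages branch's content from the same source.
subprocess.check_call(['sphinx-build', '-b', 'html', 'source', 'gh-pages'])

# Confirm the tangled code works before committing either derived product.
subprocess.check_call(['python', '-m', 'unittest', 'discover', '-s', 'code'])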

The downside of this shows up when you need to create a branch for a separate effort. You make a "some-major-change" branch from master. Where's the code? Where's the doco? You don't want to commit either of those derived work products until you merge "some-major-change" back into master.

GitHub Literate Programming

There are many LP projects on GitHub. There are perhaps a dozen which focus on publishing with the Github-flavored Markdown as the source language. Because Markdown is about as easy to parse as RST, the tooling is simple. Because Markdown lacks semantic richness, I'm not switching.

I've found that semantically rich markup is essential. This is a key feature of RST. It's carried forward by Sphinx to create very sophisticated markup. Think :code:`sample` vs. :py:func:`sample` vs. :py:mod:`sample` vs. :py:exc:`sample`. The final typesetting may be similar, but they are clearly semantically distinct and create separate index entries.

A focus on Markdown seems to be a limitation. It's encouraging to see folks experiment with literate programming using Markdown and GitHub. Perhaps other folks will look at more sophisticated markup languages like RST.

Previous Exercises

See https://sourceforge.net/projects/stingrayreader/ for a seriously large literate programming effort. The HTML is also hosted at SourceForge: http://stingrayreader.sourceforge.net/index.html.

This project is awkward because -- well -- I have to do a separate FTP upload of the finished pages after a change. It's done with a script, not a simple "git push." SourceForge has a Git repository: https://sourceforge.net/p/stingrayreader/code/ci/master/tree/. But SourceForge doesn't use GitHub.com's UI, so it's not clear if it supports the gh-pages feature. I assume it doesn't, but, maybe it does. (I can't even login to SourceForge with Safari... I should really stop using SourceForge and switch to GitHub.)

See https://github.com/slott56/HamCalc-2.1 for another complex, LP effort. This predates my dim understanding of the gh-pages branch, so it's got HTML (in doc/build/html), but it doesn't show it elegantly.

I'm still not sure this three-branch Literate Programming approach is sensible. My first step should probably be to rearrange the PyLit3 project into this three-branch structure.

Thursday, September 23, 2010

Comments, Assertions and Unit Tests

See "Commenting the Code". This posting tickled my fancy because it addressed the central issue of "what requires comments outside Python docstrings". All functions, classes, modules and packages require docstrings. That's clear. But which lines of code require additional documentation?

We use Sphinx, so we make extensive use of docstrings. This posting forced me to think about non-docstring commentary. The post makes things a bit more complex than necessary. It enumerates some cases, which is helpful, but doesn't see the commonality between them.

The posting lists five cases for comments in the code.
  1. Summarizing the code blocks. Semi-agree. However, many code blocks indicate too few functions or methods. I rarely write a function long enough to have "code blocks". And the few times I did, it became regrettable. We're unwinding a terrible mistake I made regarding an actuarial calculation. It seemed so logical to make it four steps. It's untestable as a 4-step calculation.
  2. Describe every "non-trivial" operation. Hmmm... Hard to discern what's trivial and what's non-trivial. The examples in the original post seem to be a repeat of #1. However, it seems more like this is a repeat of #5.
  3. TODO's. I don't use comments for these. These have to be official ".. todo::" notations that will be picked up by Sphinx. So these have to be in docstrings, not comments.
  4. Structures with more than a couple of elements. The example is a tuple of tuples. I'd prefer to use a namedtuple, since that includes documentation.
  5. Any "doubtful" code. This is -- actually -- pretty clear. When in doubt, write it out. This seems to repeat #2.
One of the other cases in the post was really just a suggestion that comments be "clear as well as short". That's helpful, but not a separate use case for code comments.

So, of the five situations for comments described in the post, I can't distinguish two of them and don't agree with two more.

This leaves me with two use cases for Python code commentary (distinct from docstrings).
  • A "summary" of the blocks in a long-ish method (or function)
  • Any doubtful or "non-trivial" code. I think this is code where the semantics aren't obvious; or code that requires some kind of review or explanation of what the semantics are.
The other situations are better handled through docstrings or named tuples.
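
To illustrate those two alternatives with invented names: a namedtuple documents its own elements, and a .. todo:: in a docstring is something Sphinx's todo extension will collect.

from collections import namedtuple

# A named tuple documents its elements; a tuple of anonymous tuples needs a comment.
RatePoint = namedtuple('RatePoint', ['age', 'rate'])

def monthly_rate(annual):
    """Convert an annual rate to a monthly rate.

    ..  todo:: Confirm the rounding rule.
    """
    return (1 + annual) ** (1 / 12.0) - 1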

Assertions

Comments are interesting and useful, but they aren't real quality assurance.

A slightly stronger form of commentary is the assert statement. Including an assertion formalizes the code into a clear predicate that's actually executable. If the predicate fails, the program was mis-designed or mis-constructed.

Some folks argue that assertions are a lot of overhead. While they are overhead, they aren't a lot of overhead. Assertions in the body of the inner-most loops may be expensive. But most of the really important assertions are in the edge and corner cases, which (a) occur rarely, (b) are difficult to design, and (c) are difficult to test.

Since the obscure, oddball cases are rare, cover these with the assert statement in addition to a comment.
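
For example -- the function and its precondition are invented for illustration:

def lookup_rate(table, age):
    """Return the rate for an age; table is a sorted list of (age, rate) pairs."""
    # Corner case worth formalizing: an age outside the table means the caller
    # was mis-constructed, not that we should guess at a value.
    assert table[0][0] <= age <= table[-1][0], "age outside the table"
    return [r for a, r in table if a <= age][-1]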

That's Fine, But My Colleagues are Imbeciles

There are numerous questions on Stack Overflow that amount to "comments don't work". Look at the hundreds of questions that include the keywords public, protected and private. Here's a particularly bad question with a very common answer.
Because you might not be the only developer in your project and the other developers might not know that they shouldn't change it. ...
This seems silly. "other developers might not know" sounds like "other developers won't read the comments" or "other developers will ignore the comments." In short "comments don't work."

I disagree in general. Comments can work. They work particularly well in languages like Python where the source is always available.

For languages like C++ and Java, where the source can be separated and kept secret, comments don't work. In this case, you have to resort to something even stronger.

Unit Tests

Unit tests are perhaps the best form of documentation. If someone refuses to read the comments, abuses a variable that's supposed to be private, and breaks things, then tests will fail. Done.

Further, the unit test source must be given to all the other developers so they can see how the API is supposed to work. A unit test is a living, breathing document that describes how a class, method or function behaves.
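
Continuing the invented lookup_rate example from above, the test is the document:

import unittest

class TestLookupRate(unittest.TestCase):
    """Shows -- in executable form -- how lookup_rate is supposed to behave."""
    def test_returns_rate_for_age(self):
        table = [(20, 0.01), (30, 0.02)]
        self.assertEqual(lookup_rate(table, 30), 0.02)
    def test_age_outside_table_is_a_design_error(self):
        self.assertRaises(AssertionError, lookup_rate, [(20, 0.01)], 65)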

Explanatory Power

Docstrings are essential. Tools can process these.

Comments are important for describing what's supposed to happen. There seem to be two situations that call for comments outside docstrings.

Assertions can be comments which are executable. They aren't always as descriptive as English prose, but they are formal and precise.

Unit tests are important for confirming what actually happens. There's really no alternative to unit testing to supplement the documentation.

Thursday, November 26, 2009

Python Book -- Version 2.6

Completely revised the Building Skills in Python book.

It now covers Python 2.6 and is much, much easier to maintain in ReStructured Text markup, formatted with Sphinx and LaTeX (via TeXLive), than it was in XML.

XML -- while modern and clean and uniform -- isn't as convenient as LaTeX and RST.

Wednesday, June 24, 2009

Semantic Markup -- RST vs. XML

I have very mixed feelings about XML's usability.

An avowed goal of the inventors of XML was "XML documents should be human-legible and reasonably clear." While I like to think that "legible" means usable, I'm feeling that legibility is really a minimal standard; I think it's a polite way of saying "viewable with any text editor."

I've got some content (my Building Skills books) that I've edited with a number of tools. As I've changed tools, I've come to really understand what semantic markup means.

Once Upon A Time

When I started -- back in '00 or '01 -- I was taking notes on Python using BBEdit and other text-editor tools. That doesn't really count.

The first drafts of the Python book were written using AppleWorks; the predecessor to Apple's iWork Pages product. Any Mac text editor is a joy to use. Except, of course, that AppleWorks semantic markup wasn't the easiest thing to use. It was little more than the visual styles with meaningful names.

Then I converted the whole thing to XML.

DocBook Semantic Markup

The DocBook XML-based markup seemed to be the best choice for what I was doing. It was reasonably technically focused, and provided a degree of structure and formality.

To convert from AppleWorks, I exported the entire thing as text and then used the LEO Outlining Editor to painstakingly -- manually -- rework it into XML.

At this point, the XML tags were a visible part of the document, and editing the document meant touching the tags. Not the easiest thing to do.

I switched to XMLmind's XXE. This was nice -- in a way. I didn't have to see the XML tags, but I was heavily constrained by the clunky way it handled the XML document structure. Double-clicking a word could lead to ambiguity about which level of tag you wanted to work with.

The XML was "invisible" but the many-layered hierarchical structure was very much in my face.

RST Semantic Markup

After becoming a heavy user of Sphinx, I realized that I might be able to simplify my life by switching from XML to RST.

There are a number of gains when moving to RST.
  1. The document is simpler. It's approximately plain text, with a number of simple constraints.
  2. Editing is easier because the markup is both explicit and simple.
  3. The tooling is simpler. Sphinx pretty much does what I want with respect to publication.
There is just one big loss: semantic markup. DocBook documents are full of <acronym>TLA</acronym> to provide some meaningful classification behind the various words. It's relatively easy to replace these with RST's Interpreted Text Roles. The revised markup is :acronym:`TLA`.

The smaller, less relevant loss is the inability to nest inline markup. I used nested markup to provide detailed <function><parameter>a</parameter></function> kinds of descriptions. I think :code:`function(x)` is just as meaningful when it comes to analyzing and manipulating the XML with automated tools.

The Complete Set of Roles

I haven't finished the XML -> Sphinx transformation. However, I do have a list of roles that I'm working with.

Here's the list of literal conversions. Some of these have obvious Sphinx/RST replacements. Some don't. I haven't defined CSS markup styles for all of these -- but I could. Instead, I used the existing roles for presentation.

.. role:: parameter(literal)
.. role:: replaceable(literal)
.. role:: function(literal)
.. role:: exceptionname(literal)
.. role:: classname(literal)
.. role:: methodname(literal)
.. role:: varname(literal)
.. role:: envar(literal)
.. role:: filename(literal)
.. role:: code(literal)

.. role:: prompt(literal)
.. role:: userinput(literal)
.. role:: computeroutput(literal)

.. role:: guimenu(strong)
.. role:: guisubmenu(strong)
.. role:: guimenuitem(strong)
.. role:: guibutton(strong)
.. role:: guilabel(strong)
.. role:: keycap(strong)

.. role:: application(strong)
.. role:: command(strong)
.. role:: productname(strong)

.. role:: firstterm(emphasis)
.. role:: foreignphrase(emphasis)
.. role:: attribution
.. role:: abbrev

The next big step is to handle roles that are more than a simple style difference. My benchmark is the :trademark: role.

Adding A Role

Here's what you do to add a semantic markup role to your document processing tool stack.

First, write a small module to define the role.

Second, update Sphinx's conf.py to name your module. It goes in the extensions list.

Here's my module to define the trademark role.

import docutils.nodes
from docutils.parsers.rst import roles

def trademark_role(role, rawtext, text, lineno, inliner,
                   options={}, content=[]):
    """Build text followed by inline substitution '|trade|'"""
    roles.set_classes(options)
    word = docutils.nodes.Text(text, rawtext)
    symbol = docutils.nodes.substitution_reference('|trade|', 'trade', refname='trade')
    return [word, symbol], []

def setup(app):
    app.add_role("trademark", trademark_role)

Here's the tweak I made to my conf.py

import sys, os
project=os.path.join( "")
sys.path.append("/Users/slott/Documents/Writing/NonProg2.5/source")
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.ifconfig', 'docbook_roles' ]

That's it. Now I have semantic markup that produces additional text (in this case, the TM symbol). I don't think there are too many more examples like this. I'm still weeks away from finishing the conversion (and validating all the code samples again).

But I think I've preserved the semantic content of my document in a simpler, easier-to-use set of tools.

Wednesday, May 20, 2009

This sounds complicated, because it is

For a while, I generated documentation with Cheetah. I wrote bodies as a fragment of HTML and used Cheetah to wrap those bodies in standard templates with navigation and branding.

To write my books, I learned DocBook markup and used DocBook XSL tools to create HTML and PDF versions of the book's text. Even though XML is hard to work with, I managed to muddle through. It's painful -- at times -- but doable.  

[Eventually, I found XMLMind's XML Editor.  It rocks.  But that's off-topic.]

Then, I found RST and RST2HTML.  For a while, I wrote my documentation in RST and used a simple script to create the HTML version of the documentation from RST source.

Why ReStructuredText?

From their site: "reStructuredText is an easy-to-read, what-you-see-is-what-you-get plaintext markup syntax".  
  • Easy-to-Read.  The markup is very, very simple.  Mostly spacing and simple quoting.  Yet, for edge cases, there is enough richness to approach DocBook XML.
  • WYSIWYG.  The markup doesn't get in the way; you write the text with a few conventions for spacing and quoting.
  • Plain Text.  A few spacing and quoting rules are used to distinguish structure from content.  Presentation is a limited part of RST (like HTML, where some presentation is present in the structural markup, but can be avoided).
RST led me, eventually, to Sphinx.

The Secret of Sphinx

Sphinx is RST-based markup.  You write in plaintext (plus some quoting and spacing) and you get an elegant HTML web site with inter-document references all resolved correctly, contents, indexes, auto-generated API documentation for your Python software, syntax coloring, everything.  Wow.

I can't stop myself from doing everything in Sphinx.  You create a development structure for your source files.  You use a series of toctree directives to build the resulting documentation structure that people will see and use.
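
For example, a top-level index.rst might be little more than this (a sketch; the file names are whatever your source tree uses):

..  toctree::
    :maxdepth: 2

    intro
    markup
    applets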

I've decided to convert some ancient Cheetah-based stuff to Sphinx.  

Unmarking Up

Revising HTML-based document bodies to RST is annoying.  It can be done with Beautiful Soup.  The HTML is pretty regular (and pretty simple) so it wouldn't be too bad.  Except for a bunch of edge cases that have significant complexity.

The original Cheetah-based site wasn't purely documentation.  It doesn't fit the Sphinx use cases perfectly.  A fairly significant percentage of the Cheetah-based pages are HTML pages with complex, embedded applets to do calculations.

These pages are not -- strictly speaking -- documentation.  They're an application.  They contain markup (<embed> mostly) that RST can't generate.  Further, they have to be unit tested prior to running Sphinx to build the documentation, since the HTML is actually part of the application.

Raw HTML?

The applet pages are -- more or less -- raw HTML pages that need to be folded in with the Sphinx-generated documentation.  Sphinx has an html_static_path configuration parameter that can copy these applications from project folders into destination directories.
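
In conf.py, that's one setting; the directory name here is illustrative:

# Everything under applets/ is copied into the output as-is.
html_static_path = ['applets']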

But this leaves me with dozens of Cheetah-generated pages as part of this application.  The presence of Cheetah in the midst of this Sphinx operation makes things complicated.

Or, perhaps it doesn't.

It turns out that Sphinx is built on Jinja.  There's a template engine under the hood!  That's handy.  That lets me build the application HTML with a slightly different template engine; one that's compatible with the rest of the Sphinx-generated site.
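
Here's a sketch of rendering one applet page with Jinja directly; the template and file names are mine for illustration, not part of Sphinx:

from jinja2 import Environment, FileSystemLoader

env = Environment(loader=FileSystemLoader('templates'))
page = env.get_template('applet.html').render(title="Calculator")
with open('build/html/applet.html', 'w') as target:
    target.write(page)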

I think I've got a clean, RST-based replacement for my lovingly hand-crafted HTML.  It's a lot of rework, but the simplification is of immense value.

Monday, May 4, 2009

All Those TODO's

About a year ago, we started out doing Python development with simple rst2html documents for requirements, design, etc.  In the code, we had comments that used epydoc with the epytext markup language.

No, it wasn’t confusing.  Free-text documents (requirements, architecture, design, test plans, etc.) are easy and fun to write in RST.  Just write.  Leave the formatting to someone else.  A little semantic markup doesn’t hurt, but you don’t spend hours with MS-Word trying to disentangle your bullets and your numbering.

Adding comments to code in epytext was pretty easy, also.

Then I discovered Sphinx.  Sphinx can add module documentation to a document tree very elegantly.  Further, Sphinx can pull in RST-formatted module comment strings.  Very nice.

Except, of course, we have hundreds of modules in epytext.  Today, I started tracking down all of the 150+ modules without proper document strings in RST notation.  Hopefully, this time tomorrow, I’ll have a much, much better -- and internally consistent -- set of documentation.