I'm trying to find out whether the Python library Dragonfly can use the context and grammar you give it to improve recognition. The idea is that if the speech recognition engine itself knows the grammar of what you can say, recognition should be greatly improved; but if the Dragonfly library is merely checking whether arbitrary dictation returned by the recognizer happens to match the grammar, I'd expect no improvement.
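To make the distinction concrete, here's a toy sketch of the two possibilities I mean. This is not Dragonfly code and the grammar/phrases are made up; it just illustrates engine-side constraint versus library-side filtering:

```python
# A hypothetical command grammar (illustrative only, not Dragonfly's API).
GRAMMAR = {"open file", "close window", "save document"}

def constrained_recognize(hypotheses):
    """Engine-side constraint: the engine considers only phrases in the
    grammar, so an out-of-grammar hypothesis can never win even if it
    scores higher acoustically."""
    for phrase in hypotheses:  # ranked best-first by the engine
        if phrase in GRAMMAR:
            return phrase
    return None

def post_hoc_match(dictation):
    """Library-side check: the engine returns free dictation, and the
    library only tests whether that text happens to match the grammar."""
    return dictation if dictation in GRAMMAR else None

# With engine-side constraints, a near-miss top hypothesis is skipped
# and the in-grammar alternative is recovered:
print(constrained_recognize(["open while", "open file"]))  # -> open file

# With post-hoc matching, the same misrecognition is simply rejected:
print(post_hoc_match("open while"))  # -> None
```

In other words, I'm asking which of these two models Dragonfly actually implements.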
Also, since Dragonfly supports both Dragon and Windows Speech Recognition, it would be helpful to know whether the answer differs between the two engines.