11
votes

somewhat related to: libxml2 from java

yes, this question is rather long-winded - sorry. I kept it as dense as I felt possible. I bolded the questions to make it easier to peek at before reading the whole thing.

**Why is SAX parsing faster than DOM parsing?** The only thing I can come up with is that with SAX you're probably ignoring the majority of the incoming data, and thus not wasting time processing the parts of the XML you don't care about. IOW - after parsing with SAX, you can't recreate the original input. **If you wrote your SAX parser so that it accounted for each and every XML node (and could thus recreate the original), would it still be any faster than DOM?**

The reason I'm asking is that I'm trying to parse XML documents more quickly. I need access to the entire XML tree AFTER parsing. I'm writing a platform for 3rd-party services to plug into, so I can't anticipate which parts of the XML document will be needed and which won't - I don't even know the structure of the incoming document. This is why I can't use JAXB or SAX. Memory footprint isn't an issue for me because the XML documents are small and I only need one in memory at a time. It's the time it takes to parse this relatively small XML document that's killing me. I haven't used StAX before, but perhaps I need to investigate it further because it might be the middle ground? **If I understand correctly, StAX keeps the original XML structure and processes the parts I ask for on demand?** In that way, the initial parse might be quick, and each time I ask it to traverse a part of the tree it hasn't yet visited, that's when the processing takes place?

If you provide a link that answers most of the questions, I will accept your answer (you don't have to directly answer my questions if they're already answered elsewhere).

update: I rewrote it in SAX and it now parses documents in 2.1 ms on average. That's an improvement (16% faster) over the 2.5 ms DOM was taking, but not the magnitude that I (and others) would have guessed.

Thanks

4
I'd say the question of which is faster is irrelevant for your purposes, because you need to make arbitrary queries against the tree. Which means that you have to build some representation of the tree, and have some way to create queries against it. So either you use DOM/XPath, or you write your own equivalents. - Anon
I suspect, however, that your real issue is not SAX vs DOM per se, but how your system is configured and/or how you're accessing the data. It really shouldn't take that long to parse a "small" document using DOM (or one of the DOM equivalents). Have you quantified the difference (that you're seeing) between SAX and DOM? - Anon
I have quantified the DOM approach: small (approx. 300k) XML documents. The current implementation uses Xerces-J and takes about 2.5 ms per XML document on a 1.5 GHz machine. Quantifying SAX depends somewhat on how much of the XML you choose to keep around and what you do with it. You're right - I don't think SAX will work for me - the question was more out of curiosity. - andersonbd1
2.5 ms really doesn't seem that bad. If you're just looking to satisfy curiosity, I'd suggest the following comparison programs: (1) read the file using an InputStreamReader that does a UTF-8 conversion, and (2) parse the document via SAX, using an empty DefaultHandler (ie, let it parse and dispatch, but don't do anything with the results). - Anon
That said, garbage collection can be an issue if you're pushing a lot of documents through DOM: they tend to stick around long enough to get into the tenured generation. - Anon
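The second comparison program the commenter describes can be sketched like this - timing a SAX parse with an empty DefaultHandler, so the document is fully parsed and dispatched but nothing is done with the events (the class name is mine):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.helpers.DefaultHandler;

public class SaxBaseline {
    // Measures pure parse-and-dispatch cost: DefaultHandler ignores every event.
    public static long parseWithEmptyHandler(byte[] xml) throws Exception {
        SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
        long start = System.nanoTime();
        parser.parse(new ByteArrayInputStream(xml), new DefaultHandler());
        return System.nanoTime() - start;
    }

    public static void main(String[] args) throws Exception {
        byte[] xml = "<root><a>1</a><b>2</b></root>"
                .getBytes(StandardCharsets.UTF_8);
        System.out.println("parse took " + parseWithEmptyHandler(xml) + " ns");
    }
}
```

Whatever time this reports is the floor; the gap between it and your DOM numbers is the cost of building and keeping the tree.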

4 Answers

15
votes

Assuming you do nothing but parse the document, the ranking of the different parser standards is as follows:

1. StAX is the fastest

  • Events are pulled by your application as needed; nothing beyond the raw event is materialized until you ask for it

2. SAX is next

  • It does everything StAX does, plus the content is realized for you automatically (element name, namespace, attributes, ...) and pushed to your handler

3. DOM is last

  • It does everything SAX does and additionally presents the information as instances of Node, assembled into a complete in-memory tree.

Your Use Case

  • If you need to keep all of the XML, DOM is the standard representation. It integrates cleanly with the XSLT transform (javax.xml.transform), XPath (javax.xml.xpath), and schema validation (javax.xml.validation) APIs. However, if performance is key, you may be able to build your own tree structure with StAX faster than a DOM parser can build a DOM.
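A sketch of that last option - building a minimal tree of your own with StAX instead of a full DOM. The Node class here is hypothetical and deliberately lighter than org.w3c.dom.Node (no attributes, no sibling links), which is where any speedup would come from:

```java
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

public class StaxTreeBuilder {
    // Minimal tree node: just a name, text content, and children.
    public static class Node {
        public final String name;
        public final StringBuilder text = new StringBuilder();
        public final List<Node> children = new ArrayList<>();
        Node parent;
        Node(String name) { this.name = name; }
    }

    // Pull events from the stream and assemble the tree in one pass.
    public static Node parse(String xml) throws Exception {
        XMLStreamReader r = XMLInputFactory.newInstance()
                .createXMLStreamReader(new StringReader(xml));
        Node root = null, current = null;
        while (r.hasNext()) {
            switch (r.next()) {
                case XMLStreamConstants.START_ELEMENT:
                    Node n = new Node(r.getLocalName());
                    if (current != null) {
                        current.children.add(n);
                        n.parent = current;
                    } else {
                        root = n;
                    }
                    current = n;
                    break;
                case XMLStreamConstants.CHARACTERS:
                    if (current != null) current.text.append(r.getText());
                    break;
                case XMLStreamConstants.END_ELEMENT:
                    current = current.parent;
                    break;
            }
        }
        return root;
    }
}
```

Whether this actually beats your DOM parser is something to measure, not assume - the win shrinks as you add back the features (attributes, namespaces) that DOM gives you for free.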
12
votes

DOM parsing requires you to load the entire document into memory and then traverse a tree to find the information you want.

SAX only requires as much memory as you need to do basic IO, and you can extract the information that you need as the document is being read. Because SAX is stream oriented, you can even process a file which is still being written by another process.
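For example, a SAX handler that extracts one piece of information (here an assumed <title> element; the names are mine) while the document streams by, never holding more than the current element's text in memory:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

// Collects the text of every <title> element and ignores everything else,
// so memory use stays flat no matter how large the document is.
public class TitleCollector extends DefaultHandler {
    public final List<String> titles = new ArrayList<>();
    private StringBuilder buf;  // non-null only while inside a <title>

    @Override
    public void startElement(String uri, String local, String qName, Attributes atts) {
        if ("title".equals(qName)) buf = new StringBuilder();
    }

    @Override
    public void characters(char[] ch, int start, int len) {
        if (buf != null) buf.append(ch, start, len);
    }

    @Override
    public void endElement(String uri, String local, String qName) {
        if ("title".equals(qName)) {
            titles.add(buf.toString());
            buf = null;
        }
    }

    public static List<String> collect(String xml) throws Exception {
        TitleCollector handler = new TitleCollector();
        SAXParserFactory.newInstance().newSAXParser().parse(
                new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)),
                handler);
        return handler.titles;
    }
}
```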

12
votes

SAX is faster because DOM parsers often use a SAX parser internally to parse the document, then do the extra work of creating and wiring up objects to represent each and every node - even the ones the application doesn't care about.

An application that uses SAX directly is likely to utilize the information set more efficiently than a DOM "parser" does.

StAX is a happy medium where an application gets a more convenient API than SAX's event-driven approach, yet doesn't suffer the inefficiency of creating a complete DOM.
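To illustrate the pull style: with StAX's cursor API you drive the loop yourself and can stop as soon as you have what you need, so the rest of the document is never parsed at all (the method and class names are mine):

```java
import java.io.StringReader;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamReader;

public class StaxFirstMatch {
    // Pull events until the first element with the given local name,
    // return its text, and stop - everything after it goes unparsed.
    public static String firstElementText(String xml, String name) throws Exception {
        XMLStreamReader r = XMLInputFactory.newInstance()
                .createXMLStreamReader(new StringReader(xml));
        try {
            while (r.hasNext()) {
                if (r.next() == XMLStreamReader.START_ELEMENT
                        && r.getLocalName().equals(name)) {
                    return r.getElementText(); // consumes up to the end tag
                }
            }
            return null;
        } finally {
            r.close();
        }
    }
}
```

Compare this with SAX, where the parser pushes every event at you until the document ends (or you throw an exception to abort), and with DOM, where the whole tree exists before you can look at any of it.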

2
votes

SAX is faster than DOM (the difference is usually felt when reading large XML documents) because SAX hands you information as a sequence of events (usually accessed through a handler), while DOM creates Node objects and manages the node structure until a full DOM tree, mirroring the XML document, has been built.

For relatively small files, you won't feel the effect - except possibly the extra processing DOM does to create Node elements and node lists.

I can't really comment on StAX since I've never played with it.