0 votes

I'm working on manually parsing the glyphs from a TrueType (.ttf) font file. Depending on the font, a file can contain anywhere from hundreds to thousands of glyphs, and these glyphs can be segmented into discrete ranges such as Latin, Greek, Cyrillic, etc.

I'm just wondering: how do conventional text applications such as Microsoft Word handle such a large variety of glyphs? Is a certain range of characters loaded at initialization, with special characters loaded only when they're required? If so, would I be expected to keep the font data permanently in memory and parse glyph data from it, or would it be better to periodically open, read and close the source file when the data is required?


1 Answer

4 votes

Pretty much all systems load glyphs on demand. Pretty much all of them also keep used font files mapped into memory.
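To illustrate the "keep the file mapped" part, here's a minimal sketch using POSIX mmap (on Windows you'd use CreateFileMapping/MapViewOfFile instead). The `MappedFont` struct and `map_font` function are just names for this example, not any real API:

```cpp
// Minimal sketch: map a .ttf file once and keep the mapping around for the
// lifetime of the process. POSIX-only (mmap); error handling kept short.
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>

struct MappedFont {
    const uint8_t* data = nullptr;  // read-only view of the whole file
    size_t size = 0;
};

bool map_font(const char* path, MappedFont& out) {
    int fd = open(path, O_RDONLY);
    if (fd < 0) return false;

    struct stat st;
    if (fstat(fd, &st) != 0) { close(fd); return false; }

    void* p = mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);  // the mapping stays valid after closing the descriptor
    if (p == MAP_FAILED) return false;

    out.data = static_cast<const uint8_t*>(p);
    out.size = static_cast<size_t>(st.st_size);
    return true;
}

int main() {
    MappedFont font;
    if (!map_font("DejaVuSans.ttf", font)) {  // example font name
        std::perror("map_font");
        return 1;
    }
    // Glyph parsing reads directly from font.data; pages of the file are
    // only paged in by the OS as they are actually touched.
    std::printf("mapped %zu bytes\n", font.size);
    munmap(const_cast<uint8_t*>(font.data), font.size);
    return 0;
}
```

The point of mapping rather than reading the whole file is that the OS only pages in the parts of the font you actually touch, and can share those pages between processes using the same file.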

In Windows, as far as I understand, at least with GDI, a core part of font handling is actually implemented in the kernel. That allows the font memory usage (the mapped file as well as the rasterization result cache) to be shared amongst processes. Linux implements things very differently: with modern fonts, every process rasterizes the glyphs it needs from scratch. The X server, however, shares a glyph cache across processes, but that's an implementation detail.

At any rate, load glyphs on demand. There's no reason not to do it that way. I might be able to help more if you can be more specific about what you want to do.
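To make "load glyphs on demand" concrete, a common pattern is a small cache keyed by glyph index, where an outline is parsed out of the mapped file only the first time that glyph is requested. This is only a sketch: `parse_glyph_from_tables` and `GlyphOutline` are placeholders for your own loca/glyf parsing code, not a real library.

```cpp
// Sketch of an on-demand glyph cache. parse_glyph_from_tables() stands in
// for your own TrueType parsing (look up the 'loca' offset, then decode the
// corresponding 'glyf' entry); GlyphOutline is likewise a hypothetical type.
#include <cstdint>
#include <unordered_map>
#include <vector>

struct GlyphOutline {
    std::vector<int16_t> xs, ys;         // contour point coordinates
    std::vector<uint16_t> contour_ends;  // index of last point in each contour
};

// Assumed to exist elsewhere: parses one glyph out of the mapped font data.
GlyphOutline parse_glyph_from_tables(const uint8_t* font_data, uint16_t glyph_index);

class GlyphCache {
public:
    explicit GlyphCache(const uint8_t* font_data) : font_data_(font_data) {}

    // Returns the cached outline, parsing it from the font on first use.
    const GlyphOutline& get(uint16_t glyph_index) {
        auto it = cache_.find(glyph_index);
        if (it == cache_.end()) {
            it = cache_.emplace(glyph_index,
                                parse_glyph_from_tables(font_data_, glyph_index)).first;
        }
        return it->second;
    }

private:
    const uint8_t* font_data_;  // points into the memory-mapped file
    std::unordered_map<uint16_t, GlyphOutline> cache_;
};
```

With this arrangement the font file never needs to be reopened and nothing is parsed up front; the first request for a glyph pays the parsing cost, and later requests hit the cache.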