I have a webview in my Windows 8 (Metro) app that I use to display much of my content. The webview automatically scales all CSS dimensions by 100%, 140%, or 180%, depending on the display's pixel density. This means that when I specify:
#square {
    width: 100px;
    height: 100px;
    background-color: white;
}
...we get a nice square that is 100, 140, or 180 device pixels, depending on the display. So far, so good.
Further, if I supply an image that is 100px square, the OS correctly scales it up to 140 or 180 device pixels as appropriate on higher-density screens.
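For instance (the markup and filename here are purely illustrative), the simple case is just:

<!-- square.png is 100 device pixels square; it occupies 100 CSS px,
     which the OS renders as 140 or 180 device pixels on denser displays -->
<img src="square.png" alt="square">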
Further still, if I supply versions of the image that are 100px, 140px, and 180px square, and I specify the size as 100px in the CSS, like this:
#my_image {
    width: 100px;
    height: 100px;
}
...the OS uses an area that is 100 dp square (that is to say, 100, 140, or 180 device pixels square, as appropriate) and automatically selects the right image. So far, still good.
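Concretely, that setup looks roughly like this (filenames illustrative; as I understand the resource loader, the unqualified name is resolved to the matching scale-qualified file on disk):

<!-- On disk: square.scale-100.png, square.scale-140.png, square.scale-180.png -->
<!-- #my_image is pinned to 100px × 100px by the CSS above -->
<img id="my_image" src="square.png" alt="square">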
The problem occurs when I try to use images with density qualifiers without specifying a literal size in the CSS. Why would I want to do this? I have lots of images of varying sizes, and I'd prefer to let the webview infer the appropriate size from the dimensions of the images themselves.
So I expect that if I supply 100, 140, and 180 versions of an image, the OS will be smart enough to say, "Ah, this is a 100-dp image that happens to have additional versions available."
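In other words, I'd like to be able to write just this, with no width or height anywhere, and have it occupy 100×100 dp:

<!-- No width/height in CSS or attributes; the layout size should be
     inferred from the scale-100 version of the image -->
<img src="square.png" alt="square">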
What actually happens, however, is this.
I supply images:
- square.scale-100.png
- square.scale-140.png
- square.scale-180.png
The OS picks the appropriate one: on a 180% screen, it picks the version that is 180 device pixels square. Recall, however, that we made that version 180 device pixels because it is the 180% rendition of a 100 dp image; we want it to occupy only 100×100 dp of space.
However, the webview takes 180 as the size in dp, so it scales the image by 180% again.
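To put rough numbers on it, here is what seems to happen on a 180% display:
- file selected: square.scale-180.png (180 device pixels square)
- intrinsic size the webview reports: 180 CSS px (dp)
- after the webview applies its 180% scaling again: 180 dp of layout, i.e. 324 device pixels
- what I actually want: 100 dp of layout, i.e. 180 device pixels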
How can I avoid this double-scaling? Any pointers would be awesome!