Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla eu dui tellus. Mauris nisi enim, posuere id laoreet eget, laoreet sagittis ante. Vivamus finibus arcu id metus molestie vehicula.
Just kidding. Always good to start off a page with a fresh batch of Lorem ipsum, especially on a tech page exuberantly bubbling with cheerful technobabble. Anyway, herein is even more technical stuff – everything you always wanted to know but not necessarily today. By the way, your basic desktop browser will be easier to use for this page, what with the hashes and copy-and-paste of code and whatnot.
For fun, we build a unique robot for each web page with information derived from that web page. The robot, then, is a simple image representing that web page. If the robot changes, the web page must have changed. If the web page changes, the robot changes. (Of course, the website is served with HTTPS, ensuring what is rendered in your browser is really what was intended.)
Robot graphics are generated by a local copy of the most excellent RoboHash code created by Colin Davis. See this write-up about RoboHash. The RoboHash code takes a string, any string, and constructs a robot based on that string. For our robots, the string is a hash (also called a “message digest”) generated with a cryptographic hash function from preprocessed web page markup (processed with our secret brew of interlocking code gears, pulleys, and steam pistons). So, before it is hashed, the web page markup (XHTML5 in this case) is flattened, reduced, and normalized down to a block of only alpha characters. There are several benefits of, and compelling arguments for, some sort of reduction before hashing which might be worth more paragraphs in the future, which will probably happen here due to the tight binding glue of several time machine algorithms.
So, suffice it to say, the web page hashes are spun up with our own vaguely secret prep-then-hash mechanisms and, yes, you, too, can do the same thing and generate a hash and see if it matches. After generating the web page markup in our vast, underground secret web factories, we hash it. For this work, hashing is done with SHA256. If we were only building the unique RoboHash robots, which have a small number of permutations relative to a hash’s permutations, we could get by with MD5, say, and not worry about collisions and whatnot. However, SHA256 moves us to a more useful place, versatile for later useful verifications we might want to do.
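To see the size difference for yourself, here is a minimal sketch in ksh(1), assuming a BSD-flavored md5(1) lives alongside sha256(1); the digests themselves are left for you to behold.
print -n "hello robot" | md5      # 128-bit digest: 32 hex characters
print -n "hello robot" | sha256   # 256-bit digest: 64 hex characters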
Note that this is not a comprehensive hash of everything that results in the final rendered web page. For example, cascading style sheets (CSS) and images (PNG, SVG, JPEG) are not hashed, just the markup, and not all of the markup at that. The preprocessing of the markup is an arbitrary compression scheme that tosses out data (more on that in a moment). The tossed data is not part of the hash, so to speak, therefore comparing hashes isn’t a complete validation. Even so, it is utilitarian enough at this level, weighing the practical risk of, say, just the punctuation being compromised.
Anyway, want to see the secret code? Keep reading, it’s up just ahead. Indeed, it’s not every day that such light lunch reading magically appears.
There are a million stories in Hash Prep City and this is just one of them. By the way, you can hash anything “as is” – it doesn’t have to be prepped. We prep to reduce and simplify what we are hashing, yet not so reduced that uniqueness is lost. Think of it as one variation on lossy compression. In this arbitrarily determined process, we first flatten the web page markup with the quite useful program tr(1) to ensure a predictable, reduced block of text for hashing, simply stripping out spaces, blanks, punctuation, digits, control characters, and finally anything left that is not printable. This gives a reduced, predictable, consistent block of “enough content that counts,” minimizing the possibility of error from, say, retrieval or platform text differences. Then we pipe that reduced block of text to the program sha256(1), which outputs the SHA256 hash. Here’s the code snippet, written for ksh(1).
function htmlhash
{
# keep only printable characters, then strip whitespace, punctuation,
# and digits, leaving a block of alpha characters for sha256(1)
tr -cd "[:print:]" | tr -d "[:space:]" | \
tr -d "[:punct:]" | tr -d "[:digit:]" | sha256
}
Now, pull down the website code with curl(1) and pipe it to the htmlhash function.
curl -s [pageurl] | htmlhash > somehash
Then diff(1) it with the hash listed for the page.
diff somehash publishedhash
As a practical matter for determining validity, there will be an off-line protected list of hashes to diff with hashes generated from retrieved web page stuff. In other words, obviously if the website is compromised, anything can be compromised, including the list of hashes on this page. And, of course, the robots. But this exercise using the published list suffices for trivial proof of concept and other cool buzzphrases.
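To make that concrete, here is a minimal sketch of such a check, assuming a hypothetical hashlist file with one “pagename hash” pair per line, the htmlhash function from above, and a purely illustrative base URL.
# sketch only: compare a fresh hash of each retrieved page against
# the protected off-line list (assumed format: "pagename hash" per line)
while read page hash; do
	fresh=$(curl -s "https://example.org/${page}" | htmlhash)
	[[ "$fresh" == "$hash" ]] && print "$page: ok" || print "$page: MISMATCH"
done < hashlist
Any MISMATCH line is your cue to squint suspiciously at the page in question.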
Note that the program names mentioned above are written in the Unix notation style of the program name followed by the manual section number in parentheses, generalized as name(section). See man(1), Unix, and Unix-like.
A motley crew, indeed. Notably, this page is the only page not listed. Wait, what? A mystery! The answer to which is left as an exercise for the reader.
what-the   70d8059cab3f74d61719d0588e6fc6c0ef954e1b898c4fff5220d50a42f0079b
index      221eea8841198c1757be565ebcf8782c38ef5bb86d6b22a7004dd97c90cd3f44
Typefaces are courtesy Google Fonts and Font Squirrel. Licenses are documented for each typeface on their respective web pages linked in the colophon.
Web font format is WOFF2 and is the only font format delivered by this site. Nothing but cutting edge here. Typeface data is embedded in CSS for performance. The CSS property font-variant-ligatures is in play. So is some kerning magic.
In theory, font rendering should only depend on your browser but, for example, in the case of Mac OS X, the OS version is also a factor; WOFF2 is not supported on Mac OS X Yosemite and lower, even with the very latest new-fangled WOFF2-capable modern browsers. Peruse browser WOFF2 support.
If your browser of choice can’t handle WOFF2, the browser will fall back to generic defaults, which in any case will be far less of a typographical experience than designed and intended and maybe even cause your computing device to suddenly fold into a black hole, perhaps leaping time and space back to, say, 1874, with no electrical grid, internets or webtubulars to be found. Upgrading is no doubt prudent in that case.
Body fallbacks are typically Georgia and serif. Heading fallbacks are usually HelveticaNeue‑CondensedBold, Arial, and sans-serif.
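For flavor, the delivered CSS presumably looks something like this minimal sketch; the typeface names and the truncated data URI are stand-ins, not the site’s actual stylesheet.
/* hypothetical typeface, embedded in the CSS for performance */
@font-face {
	font-family: "BodyFace";
	src: url("data:font/woff2;base64,...") format("woff2");
}
body {
	font-family: "BodyFace", Georgia, serif;   /* body fallbacks */
	font-variant-ligatures: common-ligatures;  /* the ligature control */
	font-kerning: normal;                      /* the kerning magic */
}
h2, h3 {
	font-family: "HeadingFace", "HelveticaNeue-CondensedBold", Arial, sans-serif;  /* heading fallbacks */
}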
We use a modular scale to calculate heading sizes. Specifically, the ratio of choice here is 1.250, otherwise known as Major Third. H2 and H3 are the only headings used throughout, at least with this edition. Heading sizes are calculated with Jeremy Church’s magical type scale. The relative value rem is used throughout the CSS, with just a tiny part using px or em. Plenty of good articles on the web about all this and more, starting with this excellent treatise on rem and em.
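In numbers, assuming a 1rem base (an assumption; the actual base may differ), each step up the Major Third scale multiplies by 1.25.
html { font-size: 100%; }      /* 1rem base, typically 16px */
h3   { font-size: 1.25rem; }   /* one step: 1 × 1.25 */
h2   { font-size: 1.563rem; }  /* two steps: 1 × 1.25 × 1.25 = 1.5625 */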
For light lunch reading, check your browser’s ideally excellent support for HTML5 and CSS3.
Coming soon! All the source, served by Mercurial. Yep, no plans to use git around here. Source, no doubt, will definitely include the magical pumpkin code.