JamesLeng:

> This is surprising because initially, I figured that if computers' superpowers had some sort of "inverse" or "dual", it would be equally complex. But it's straightforward. I'm still not quite sure why this is the case. For now, my working hypothesis is that subtracting from an arbitrarily large mass is innately more complex than adding to an empty set.

A sensor's symmetric complexity lives mostly in the "outside world" portion of the system, but it's also partly hidden in plain sight by components which happen to be easy to define in bulk. If you had to build a telescope's ten-meter-wide main mirror by directly adding or subtracting along a grid of perfectly cubic blocks, instead of polishing, how tiny would those cubes need to be? What vast complexity could any 3D printer capable of such a task instead encode on an object of equal size?
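For a rough sense of scale, a back-of-envelope sketch (the λ/20-at-600 nm tolerance and the blank thickness are my assumptions, not real telescope specs):

```python
import math

# Back-of-envelope: specifying a 10 m mirror blank on a grid of ~30 nm cubes.
# Assumes a surface tolerance of lambda/20 at 600 nm and a 0.5 m thick blank;
# the numbers are illustrative only.
cube_edge = 30e-9          # meters, ~lambda/20 for visible light
mirror_diameter = 10.0     # meters
blank_thickness = 0.5      # meters

cubes_across = mirror_diameter / cube_edge            # ~3.3e8 per axis
cubes_per_layer = math.pi * (cubes_across / 2) ** 2   # one horizontal slice
layers = blank_thickness / cube_edge                  # ~1.7e7 slices
total_cubes = cubes_per_layer * layers                # ~1.5e24 cubes

# One present/absent bit per cube gives the raw description length:
print(f"{total_cubes:.1e} cubes, i.e. ~{total_cubes:.1e} bits uncompressed")
```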

thefance:

Sorry, my assumptions were communicated poorly.

I think complexity is usually a matter of mental models. Mental models don't always have to operate at the lowest level of supervenience. E.g. a clay mug can be obscenely complex at the level of quantum mechanics, or much less complex if we're just making a triangle mesh in Maya. Yes, a telescope lens has lots of atoms. But software rarely cares about atoms (sanity check: does Substack track the state of atoms, or keypresses?). So I don't think that's the right level of complexity to operate on, with respect to the Binary Classification Theory.

Consider that an algorithm or address which performs its job competently will be highly specific. E.g. arguably the least useful algorithm is brute-force search, which doesn't constrain the hypothesis space at all, while a good algorithm narrows the search space tightly (unlike, say, a Bloom filter, which deliberately blurs set membership and admits false positives). Thus, a good algorithm is necessarily specific in the sense that it specifies a small region of search space. And complexity is the price of supporting that specificity. But a sensor, in order to be a good and competent sensor, only needs to reliably sense and transmit data. The data doesn't need to be complex. E.g. a sensor could transmit only a single bit of data at a time and still be considered a good and competent sensor, so long as that bit was trustworthy, i.e. its state reliably corresponded to that of the referent.
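A minimal sketch of that single-bit point (everything here is invented for illustration):

```python
import random

# A referent with a binary state that the sensor is meant to track.
class Referent:
    def __init__(self):
        self.state = random.choice([True, False])

# A "good" one-bit sensor: its output is simple, but it reliably co-varies
# with the referent. Its quality is fidelity, not output complexity.
def faithful_sensor(referent, error_rate=0.01):
    bit = referent.state
    if random.random() < error_rate:
        bit = not bit  # rare transmission error
    return bit

# Check empirically that the single bit is trustworthy.
trials = [Referent() for _ in range(10_000)]
agreement = sum(faithful_sensor(r) == r.state for r in trials) / len(trials)
print(f"bit/state agreement: {agreement:.3f}")  # ~0.99
```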

The fact that telescope mirrors are complex (insofar as they comprise lots of atoms) is more an artifact of engineering, I think, than a generalizable observation about information theory.

In the meantime, my current working hypothesis is that I'm conflating two different meanings of the word "sensitivity". I.e. sensitivity qua set-inclusion seems different from sensitivity qua state-tracking. But on the other hand, they do seem related, at least ostensibly. And I don't know why that would be the case. So... I guess it's still an open problem.

JamesLeng:

A typical clay mug could have several cubic millimeters of material added or removed just about anywhere on its surface without notably compromising core function, so long as the result isn't a leak, or a sharp bit positioned to irritate the user. A telescope mirror has much stricter GD&T specs for acceptable deviation from its ideal figure, because grime or scratches will produce errors in the data it's meant to be measuring. Accordingly, the plans for a clay mug (or an equivalently crude mechanical sensor, such as a deer-chaser fountain measuring flow rate) can be compressed further without loss - any given measurement has fewer significant digits - and thus contain less information.
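To make the compression claim concrete, here's a sketch (the synthetic surface and the use of zlib are arbitrary choices for illustration): quantize the same shape at mug-grade vs. mirror-grade tolerance and compare compressed plan sizes.

```python
import math
import random
import struct
import zlib

random.seed(0)
# A synthetic surface profile: a smooth shape plus tiny fabrication noise.
heights = [math.sin(i / 50) + random.gauss(0, 1e-4) for i in range(10_000)]

def plan_size(tolerance):
    # Quantize each height to the stated tolerance (few significant digits
    # for a mug, many for a mirror), then compress the resulting plan.
    quantized = [round(h / tolerance) for h in heights]
    raw = b"".join(struct.pack("<i", q) for q in quantized)
    return len(zlib.compress(raw))

print("mug-grade plan (0.1 tolerance):    ", plan_size(1e-1), "bytes")
print("mirror-grade plan (1e-6 tolerance):", plan_size(1e-6), "bytes")
# The looser plan compresses far more: coarser tolerance, less information.
```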

thefance:

But how is this related to *sensitivity*? To convince me that the telescope idea is correct and/or relevant, it needs to relate not to complexity, but to the abstract idea of sensitivity.

In this frame, neither the lenses nor the mirrors are the sensor that's actually doing the sensing. The lenses and mirrors are merely shepherding the light either to someone's retina, or to a film substrate. To be a good sensor, each photoreceptor or pixel of film should ideally "sense" the image by activating *conditionally*, depending on the features present in the image. The lens/mirror combination doesn't exhibit this sort of conditionality. If anything, I think the lens/mirror combo is analogous to a GPU, bus, buffer, etc., in that it processes photons in parallel and tries to preserve the low entropy of the image. Which makes the precision of a lens/mirror setup an artifact of specificity rather than sensitivity. Additionally, a typical telescope is just as specific as it is sensitive. I.e. it achieves the resolution it does by zooming in on a highly specific portion of the sky, while completely ignoring the rest of the sky.
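A toy sketch of the distinction (all names and numbers invented): the lens applies the same map to whatever light arrives, while the photoreceptor's output is conditional on what reaches it.

```python
# "Shepherding" vs. "sensing", as a toy contrast.

def lens(field):
    # Unconditional: the same linear map regardless of image content
    # (a trivial stand-in for focusing/attenuation).
    return [0.9 * x for x in field]

def photoreceptor(intensity, threshold=0.5):
    # Conditional: fires only if the feature (enough light) is present.
    # On this framing, this is where the sensing actually happens.
    return intensity > threshold

incoming = [0.2, 0.7, 0.4, 0.9]
focused = lens(incoming)
readout = [photoreceptor(x) for x in focused]
print(readout)  # [False, True, False, True]
```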

JamesLeng:

A single-point deformation of the lens or mirror won't just corrupt the pixel it was meant to shepherd; it will also add misdirected light that interferes with other parts of the sensor.

Consider a "telescope" with a wide-angle fisheye lens intended to capture the entire night sky - near-maximally non-specific - but exclude light sources within a few degrees of the horizon (since those are expected to be mostly atmospheric and/or human-made). A hairline crack develops near the peak of that domed lens. The crack itself is so thin, and so nearly vertical, that on a typical night no detectable stars are occluded by it directly... but horizon glare which would otherwise have passed through the lens without interference is redirected onto the underlying detector at unplanned angles, adding background noise which makes faint stars easier to miss no matter where in the sky they are.
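As a toy model of that failure mode (the counts and the 5-sigma rule are assumed numbers, just to show the mechanism): uniform scattered glare raises the detection floor across the whole detector, not only under the crack.

```python
import math

# A star is "detected" if its signal exceeds k sigma of the background's
# Poisson shot noise. Counts are invented for illustration.
def faintest_detectable(background_counts, k=5.0):
    sigma = math.sqrt(background_counts)  # shot noise of the background
    return k * sigma                      # minimum star counts to detect

clean = faintest_detectable(100)          # intact lens
cracked = faintest_detectable(100 + 400)  # crack scatters horizon glare in

print(f"detection floor, clean lens:   {clean:.0f} counts")
print(f"detection floor, cracked lens: {cracked:.0f} counts")
```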

thefance:

We could also just completely occlude the line of sight with a lens cover. Or maybe the sky happens to be overcast. Yes, the telescope can no longer sense the object we *intend*, but I don't think that diminishes the abstract sensitivity of the film or retina. The retina senses whatever is put in front of it. Whether that happens to be a night sky, a distorted night sky, a cloud cover, or a lens cover... the sensitivity of the retina is unaffected. No?
