I'm working fairly hard on an R&D project. I'm trying to build a composite image from a whole bunch of drone photos. The software works really well up to a certain level of detail, then falls apart completely. That's fairly typical. Almost any gizmo, computer program, or algorithm has built-in assumptions that cause it to fail when it's applied to a novel set of conditions.
Basically, when the trees are just a blob of foliage as far as the stitching and reconstruction algorithms are concerned, everything works. But once there's enough detail that a tree resolves into a distinct trunk and branches, the difference in perspective from one photo to the next causes the reconstruction to fail.
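To make that concrete, here's a rough sketch of the feature-matching step that underlies most stitching pipelines. This isn't my actual code; it uses OpenCV's ORB detector as a stand-in, and the file names are placeholders. The point is just to show where the assumption lives:

```python
import cv2

# Load two overlapping drone photos (paths are placeholders).
img1 = cv2.imread("photo_a.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("photo_b.jpg", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and compute descriptors.
orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match descriptors between the two photos.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# The stitcher assumes a matched point looks roughly the same from both
# camera positions. A tree seen from two angles violates that: the same
# branch projects to genuinely different shapes in each photo, so the
# "matches" come back wrong or not at all, and the reconstruction breaks.
print(f"{len(matches)} putative matches")
```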
I can really see the utility of neural networks for these image processing algorithms. The "traditional" methods (this stuff mostly only dates back to the 1990s, so calling it "traditional" is pretty funny) rely on geometry and simple arithmetic operations to extract features from a digital photo. Most image processing techniques are an attempt at an algorithmic imitation of nature, that is, the human visual system. They kind of suck at it, though.
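When I say "simple arithmetic operations," I mean it literally. The classic Sobel edge detector, for instance, is nothing but two tiny convolutions and a square root:

```python
import numpy as np
from scipy.ndimage import convolve

def sobel_edges(gray):
    """Classic Sobel edge detection: two 3x3 convolutions plus arithmetic."""
    gray = gray.astype(float)
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]])   # horizontal gradient kernel
    ky = kx.T                     # vertical gradient kernel
    gx = convolve(gray, kx)       # brightness change left-to-right
    gy = convolve(gray, ky)       # brightness change top-to-bottom
    return np.hypot(gx, gy)       # gradient magnitude: big value = "edge"
```

That's the whole trick: wherever brightness changes sharply, call it an edge. It has no idea whether the bright/dark boundary is an object outline, a shadow, or a patch of leaves.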
The neural-network approach is a better form of mimicry. Now that dedicated neural-network hardware exists, it's feasible to "train" such systems to perform tasks like detecting edges in an image at something approaching the performance of the human visual system.
If you did that task manually, say by taking a Sharpie and tracing the outlines on a printed-out photograph, you wouldn't even have to think about it 99% of the time. And when there was some ambiguity in an outline, for example a tree branch obscuring part of a car, you'd easily distinguish the "signal" from the noise using your a priori knowledge of the world.
In theory you could "train" a neural network to do the same thing.
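Sketched in code, the training setup would look something like this. Everything here is a placeholder: the network is deliberately tiny (real learned edge detectors are much deeper), and the "labels" would be something like your Sharpie outlines scanned back in. The shape of the idea is what matters:

```python
import torch
import torch.nn as nn

# A deliberately tiny convolutional net: grayscale image in, edge map out.
# The architecture is a placeholder; the training loop is the real point.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=3, padding=1),  # 1 channel: edge score per pixel
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()  # per-pixel edge / not-edge

def train_step(photos, outlines):
    """photos: (batch, 1, H, W) grayscale images.
    outlines: (batch, 1, H, W) hand-marked edge maps, 0 or 1 per pixel."""
    optimizer.zero_grad()
    predicted = model(photos)
    loss = loss_fn(predicted, outlines)  # penalize disagreement with the human
    loss.backward()                      # nudge the weights toward agreement
    optimizer.step()
    return loss.item()
```

Run that over enough hand-marked examples and the network absorbs whatever rule you were unconsciously applying with the Sharpie, ambiguous branches and all.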
So here's the weird part. This is basically what my entire blog is about.
The information that gets encoded in the neural network via training is opaque gibberish. The "trainer" is basically encoding reality into a black box.
The Age of Reason/Enlightenment form of understanding the world is to reduce it to a toy model. The belief of that school of thought is that the toy model is a glimpse into how the universe really works. That is, the Age of Reason thinker believes the universe is fundamentally mathematical or algorithmic. They confuse their toy model with reality. Our whole civilization revolves around these shitty toy models. Governments and corporations, for example, function on this premise: they're imitation humans.
Neural-network-based technology is a significantly different philosophical approach, one that's much more in tune with the being-ness of reality. It sort of obviates the toy model approach.