Sunday, September 3, 2023

More work towards 'visualizing' complex neural networks

I'm moving quickly into a realm where the only things that really matter are histograms showing the distribution of data for a particular sample.  And when it comes down to it, that's all a NN has, too!  At this point I think that's the direction that will lead to some form of 'visualization'.  I put that in quotes because it may or may not involve information perceived through our eyes.  It may turn out, for instance, that hearing the network is the best way to understand it, or maybe some way of understanding it through color.  I have no idea, and that's why I'm doing it: it's an interesting challenge that currently has everyone stumped.  Finding something that works could lead to who knows what?

If I don't find anything useful, I'm still forced to completely understand NNs in order to represent them in a form that humans have a chance at understanding.  This will make me better at designing and using NNs, which will help with my understanding, and round it goes.

Here's the sort of stuff I'm looking at.  These are histogram plots of the differences between the initial weights and the trained weights for one of the dense layers (with 128 connections) of the CNN I'm using for my object ID project.

[Histogram plots of the initial-vs-trained weight differences]

The x-range is -1 to 1, and the y-range is 0 to 2000.  I see no pattern.  The fact remains, however, that what I'm seeing here is the difference between a model that just randomly chooses an ID each time (and therefore chooses incorrectly most of the time) and a model that chooses the correct ID every single time.
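
For what it's worth, here's roughly how plots like these can be produced.  This is just a sketch: the toy model, the layer name "watched_dense", and the made-up data all stand in for the real CNN and project code.  The idea is simply to snapshot the dense layer's kernel before training and histogram the element-wise difference afterwards.

    import numpy as np
    import tensorflow as tf
    import matplotlib.pyplot as plt

    # Toy stand-in for the real CNN: a 128-unit dense layer we want to watch.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(64,)),
        tf.keras.layers.Dense(128, activation="relu", name="watched_dense"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

    dense = model.get_layer("watched_dense")
    initial_w = dense.get_weights()[0].copy()   # kernel before training

    # Made-up data so the sketch runs end to end.
    x = np.random.rand(1000, 64).astype("float32")
    y = np.random.randint(0, 10, size=1000)
    model.fit(x, y, epochs=5, verbose=0)

    # Element-wise difference between trained and initial kernel values.
    diff = (dense.get_weights()[0] - initial_w).flatten()

    plt.hist(diff, bins=100, range=(-1, 1))     # same x-range as the plots above
    plt.ylim(0, 2000)
    plt.xlabel("trained weight - initial weight")
    plt.ylabel("count")
    plt.show()

In this toy version the kernel has 64 x 128 entries, so each entry contributes one count to the histogram.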

The fact that I don't see anything (yet) is an indication either that I'm looking at it incorrectly (very likely) or that the correct data isn't being collected/measured/etc. (also very likely).
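
On the "correct data isn't being collected" front, one obvious thing to try is snapshotting the weights every epoch instead of only at the start and end, so the evolution can be inspected (or heard, or colored).  Here's a sketch using a Keras callback; the class and names are my own invention, not project code.

    import tensorflow as tf

    class WeightSnapshots(tf.keras.callbacks.Callback):
        """Copy one layer's kernel at the end of every epoch."""

        def __init__(self, layer_name):
            super().__init__()
            self.layer_name = layer_name
            self.snapshots = []

        def on_epoch_end(self, epoch, logs=None):
            kernel = self.model.get_layer(self.layer_name).get_weights()[0]
            self.snapshots.append(kernel.copy())

    # Usage with the toy model above:
    #   snaps = WeightSnapshots("watched_dense")
    #   model.fit(x, y, epochs=5, callbacks=[snaps], verbose=0)
    #   snaps.snapshots[-1] - snaps.snapshots[0]  # same difference as before,
    #                                             # but now every epoch is kept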

I'll keep trying.
