Tuesday, August 15, 2023

New Accelerometer Data, and more NN 'mind'

I've finally rigidly connected my accelerometer to my e-bike, and am now recording the g-force in x, y, z, and total.  The signal-to-noise ratio has improved by more than a factor of two.  Here's a recording before the new connection, and then after the new connection.  You can clearly see that the signal is stronger in the bottom plot.
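As a rough sketch of how that "factor of two" improvement could be quantified, here's one way to estimate an S/N figure from a raw trace: treat power in a band around the ~6 Hz tire-rotation line as "signal" and everything else as "noise." The sample rate, band edges, and synthetic traces below are all assumptions for illustration, not my actual recordings.

```python
import numpy as np

def snr_db(accel, fs, f_lo=4.0, f_hi=8.0):
    """Estimate a signal-to-noise ratio (dB) for an accelerometer trace.

    Power in [f_lo, f_hi] Hz (around the ~6 Hz tire rotation line) counts
    as signal; power everywhere else counts as noise.
    """
    spectrum = np.abs(np.fft.rfft(accel - accel.mean())) ** 2
    freqs = np.fft.rfftfreq(len(accel), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return 10.0 * np.log10(spectrum[band].sum() / spectrum[~band].sum())

# Synthetic example: the same 6 Hz vibration with loose vs. rigid mounting
fs = 200.0                     # assumed sample rate, Hz
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
loose = np.sin(2 * np.pi * 6 * t) + 2.0 * rng.standard_normal(t.size)
rigid = np.sin(2 * np.pi * 6 * t) + 0.5 * rng.standard_normal(t.size)
print(snr_db(loose, fs), snr_db(rigid, fs))  # rigid mount scores higher
```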


This produces spectrograms that are much more complex.  Here's a spectrogram of the x data before the new connection:



... and then after the new connection:


I have no idea what all of those peaks are, or why they wander around so much.  I can still see the 6 Hz line (the tire rotation), which in this case looks pretty steady -- so those wiggly lines probably aren't correlated with speed.  Some lines go up while, at the same time, others go down, so I have no idea what I'm seeing.  Obviously vibrations of some sort, but I'm not sure where they're coming from; most likely they're coming from multiple sources.
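For anyone wanting to reproduce this kind of plot: a spectrogram like these can be generated with `scipy.signal.spectrogram`. This isn't my actual pipeline -- the sample rate, window length, and the synthetic 6 Hz trace below are all stand-in assumptions.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 200.0                     # assumed sample rate, Hz
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(1)
# Stand-in x-axis trace: a steady 6 Hz "tire rotation" line plus noise
x = np.sin(2 * np.pi * 6 * t) + 0.3 * rng.standard_normal(t.size)

# f: frequency bins, times: segment centers, Sxx: power per (freq, time) cell
f, times, Sxx = spectrogram(x, fs=fs, nperseg=1024, noverlap=512)

# The time-averaged spectrum should peak at the 6 Hz line
peak_freq = f[Sxx.mean(axis=1).argmax()]
print(peak_freq)
```

Plotting `Sxx` (usually on a log scale) with `times` and `f` as the axes gives the same kind of image as above: steady sources show up as horizontal lines, speed-dependent ones as wandering curves.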

Here are the same two data sets, but looking at the y-axis data (this one surprised me a little):





I'll be collecting the accelerometer data with this configuration for the foreseeable future.  The S/N is great and the spectra are baffling!

In other news, I'm starting to convince myself that grabbing 65536 points from every 128 values is the correct way to go, since that matches the structure of the python arrays and the output model (via layer.save_model()).  There are still strange patterns, but some of those are probably residuals from the convolutions and pooling in previous layers.  Anyhow, maybe this is pretty convincing:






So, maybe this is correct.  Gotta convince myself somehow.  I'll look at the weights from the other models I have and see what they can show me.  For now, these are weights created from synthetic image training (ahem, I think).

Here's what I said on my discord server:

"An attempted visualization of one set of weights of one 'dense' layer of my fully trained multi-layer convolutional neural network.  The white borders are not part of the model: they represent the structure of the input data to this layer (in this case, 64 32x32 'images').  The weights shown here are applied to the input data (via a simple dot product), which results in a single data value as an output.  This output (along with all other outputs in this layer) is then used as input for the next layer in the network."
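The description above maps cleanly onto a reshape: one output neuron's 65536 weights (64 x 32 x 32) can be viewed as 64 little 32x32 "images," and the layer's output really is just a dot product against the flattened input. A sketch of that idea -- the weight matrix here is random stand-in data, and the 65536-in / 128-out shape is my reading of the numbers in the post, not pulled from the actual model file:

```python
import numpy as np

# Hypothetical dense layer: 65536 inputs (64 feature maps of 32x32) -> 128 outputs
rng = np.random.default_rng(2)
W = rng.standard_normal((65536, 128))  # stand-in for the trained kernel

# One output neuron's weights, reshaped into 64 'images' of 32x32 for plotting
w = W[:, 0]
tiles = w.reshape(64, 32, 32)

# The neuron's output is a simple dot product with the flattened input,
# yielding a single scalar -- exactly as described in the quote above
x = rng.standard_normal(65536)   # stand-in input: 64 flattened 32x32 maps
out = float(x @ w)
print(tiles.shape)
```

Laying the 64 tiles out in a grid, with borders between them, would reproduce the white-bordered structure described above.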

