For AI, so far it's failed: ChatGPT inaccurately counts objects in an image that any three-year-old human could count with ease.
Up until now, I've been using this input image as a test, asking ChatGPT to count the number of objects in it. It's a tricky image for a couple of reasons:
1. There are a number of ghost-like reflections of the nuts, produced by a pretty terrible optical system: a crack runs through the center of the camera's field of view, creating the ghost images. So how many nuts are there? Hard to tell, and not exactly fair.
2. The background is complex: not only is it wood grain, the lighting isn't uniform either.
So how many objects are there? Haha, it really does depend on what you pay attention to.
So I started with something else instead: thirteen pennies on a fairly uniform background.
I've noticed that 'errors' seem to happen pretty often; I'm pretty sure it's just overload on the server. The fact that it checks its work is nice, but I can imagine that might also get annoying.
Anyhow, its first attempt at thresholding to segment this image is pretty good:
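For reference, here's a minimal sketch of the kind of thresholding step involved. The filename, the grayscale conversion, the threshold value of 128, and the assumption that the pennies are darker than the background are all mine, not the code ChatGPT actually produced:

```python
# Hypothetical reconstruction of a simple thresholding step.
# "pennies.png" and threshold=128 are assumed values.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("pennies.png").convert("L"))  # load as grayscale

threshold = 128
binary = img < threshold  # True where a pixel is darker than the cutoff
```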
Here I ask it to produce a histogram and it does...
It's easily able to change the thresholding values...
I'm even able to tell it to change the scale from linear to logarithmic and poof it does it!
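Something like the following matplotlib sketch covers those last few steps: the histogram, a marker at the current threshold, and the one-line switch from a linear to a logarithmic y-axis. It reuses img and threshold from the sketch above, and again it's my guess at the approach rather than ChatGPT's actual code:

```python
# Intensity histogram with the current threshold marked.
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.hist(img.ravel(), bins=256, range=(0, 255))  # one bin per gray level
ax.axvline(threshold, color="red")              # show the cutoff being tried
ax.set_yscale("log")                            # "linear" for the original scale
ax.set_xlabel("pixel intensity")
ax.set_ylabel("pixel count")
plt.show()
```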
I can see now that it's segmenting too finely. The solution to that (something I know) is to blur the image a little. So I tell it to apply a median filter, and poof, again it does it!
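A median filter replaces each pixel with the median of its neighborhood, which knocks out the small speckles that were being counted as separate objects. Here's a sketch using scipy.ndimage; the 5-pixel window size is a guess, not the value ChatGPT actually used:

```python
# Median filtering before thresholding; size=5 is an assumed window.
from scipy import ndimage

smoothed = ndimage.median_filter(img, size=5)
binary = smoothed < threshold  # re-threshold the smoothed image
```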
Now it's just a matter of dialing in the thresholding correctly...
There we go...
It provides some funky labels, but it got 'em!
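Under the hood, the counting step is connected-component labeling: every group of touching foreground pixels gets its own integer label. Here's a sketch of how that might look with scipy.ndimage (again my reconstruction, not the actual session code):

```python
# Label connected foreground regions and count them.
from scipy import ndimage

labels, n_objects = ndimage.label(binary)  # labels: array of region ids
print(f"counted {n_objects} objects")      # 14 here: one penny split in two
```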
Please note that it's counting 14 objects. That one penny on the far right is pretty tough to threshold properly, so it sees it as two objects. But one miscount out of 13 works out to 12/13 ≈ 92.3%, which is OK and probably at least as good as I'd manage.
At the end, I asked it why it saw 910 objects at first instead of 14.
Overall, it's an impressive showing as far as assistants go.