Transcript
00:00:04 Hey, what's up, everyone? I just wanted to give you a quick update on what I've been working on. As you know, if you've been following along for the last few videos, I'm trying to make a robot that is capable of finding pennies that I'd like to add to my own coin collection. In the last video, you saw that I created
00:00:20 a little photography studio for coins made from a pill bottle. And what I noticed, as you can see in some of the footage back in that last video, is that the bottom of the pill bottle I had kind of chopped out, and it looks like I chopped it out with a hatchet or something. So I wanted to kind of get that cleaned up.
00:00:39 And the other thing was that when I held the pill bottle over the coin and kind of rested it on the table, the coin wasn't actually filling up the camera's frame the way that I wanted it to. So the first thing I wanted to do was shorten the pill bottle, and I did that by taking a box cutter and just kind of cutting a few millimeters off the bottom.
00:00:59 Now, when I hold that studio over the penny, the penny fills the frame of the camera perfectly. So that means you're getting the maximum number of pixels in each image that you can use to identify the coins or train the AI to identify the coin sides. But as you can see, you need to be very careful when you're using sharp implements, because they hurt. Ouch! I ended up cutting my thumb.
00:01:25 So you may have noticed in the last video, I created a camera target to print out that allowed me to put the camera exactly on the same plane as the coin that I wanted to take a photo of. Well, I figured out over the last couple of weeks that I could actually take that image that I created and overlay it onto the actual camera feed. And that will allow me to place the coin
00:01:49 in the exact same position each time I take photos. And that's come in really handy for training the AI, as I can take different pictures of different sides of the coins and actually make sure that they are all in the exact same position. That makes labeling all of these images very easy for the computer to do. What I found out about that mask, though, was that it's kind of semi-transparent
00:02:15 on the camera feed, which makes it a little hard to see. So what I ended up doing in OpenCV was drawing a bright green circle around the area of the camera feed that I was interested in. That circle was opaque, and it allowed me to align the camera much more easily. And I think in the last video, I described how I cut two strips of 20 LEDs and I aligned those inside the pill
00:02:40 bottle and they were just a little bit offset. So I ran through different scenarios on the Raspberry Pi that allowed me to light different LEDs at different times and light the coins from different angles as I was taking pictures. Now, I wanted to make sure that each one of those scenarios was unique, whether in the number of LEDs that were lit,
00:03:02 the angle they were being lit from, or the brightness of the LEDs. But basically, with these two offset strips, I noticed that you can light one top LED or two, or one bottom LED or two, or a combination of three LEDs, and you can move that sequence down the strip nicely so that, again,
00:03:30 you create different lighting scenarios for the coin, and you can take an image of the coin under all of those different lighting conditions. That allowed me to create a script with 968 unique lighting scenarios, and I could combine that with my previous script, which I'm calling the Coin Studio, that allows me to take a picture under each of those different lighting conditions.
00:03:54 So when we're done with each side of the coin, we end up with approximately 968 images of just one side of the coin, all lit from various and unique angles. Now, it actually ended up not being exactly 968 images for each side of the coin, as under certain circumstances, like the penny being particularly shiny, for example, certain lighting scenarios would blow out the image sensor on the camera and you'd
00:04:22 get a completely white image with no data on it. Now, I was trying to look at the particular scenarios and the different lighting conditions that would cause that, but I wasn't able to narrow it down, and it seemed to be unique to certain pennies. So what I ended up doing was just kind of manually looking through each of those image folders and cleaning out just the ones that were completely blown out.
00:04:45 The rest of the images were very useful for training the AI, and I could again label them automatically, so it worked out really nicely. The other thing that I could do in OpenCV is, since I knew the exact location of the coin, I could crop the coin more tightly and make a square image for each coin. So I knew that the coins would be exactly 480 pixels by 480 pixels, and I could crop
00:05:11 that out before writing it back to the folder. So that saved a little bit on drive space, and it also just saved a little bit of computing power later when I needed to rotate the images. The other thing that I did was slow down the timing between images taken to give the LEDs time to reset and set a new lighting condition before taking the next photo.
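The capture loop described above, with the settle delay and the fixed 480 by 480 crop, might look roughly like this. Here `set_lights`, `take_photo`, and `save_image` are placeholders for the real LED, camera, and disk routines, which aren't shown in the video:

```python
import time

SCENARIOS = 968  # unique lighting scenarios (from the video)
DELAY_S = 0.5    # settle time so the LEDs can change state between shots
CROP = 480       # final square crop size in pixels (from the video)

def crop_coin(frame, cx, cy, size=CROP):
    """Crop a square around the known coin centre. The alignment target
    fixes the coin's position, so no detection step is needed."""
    half = size // 2
    return frame[cy - half:cy + half, cx - half:cx + half]

def capture_side(set_lights, take_photo, save_image, scenarios,
                 cx, cy, delay=DELAY_S):
    """Step through every lighting scenario, pausing between shots."""
    for scenario in scenarios:
        set_lights(scenario)
        time.sleep(delay)  # let the strip settle before the shot
        frame = take_photo()
        save_image(scenario, crop_coin(frame, cx, cy))

# 968 shots at 0.5 s apiece is roughly eight minutes per coin side
print(f"~{SCENARIOS * DELAY_S / 60:.0f} minutes per side")
```

Cropping before saving is what keeps every stored image at exactly 480 by 480 pixels, which also trims drive space and later processing time.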
00:05:36 So I actually ended up setting a half-second delay between each photo. The camera is actually capable of shooting 30 frames per second, but it didn't seem like the LEDs were capable of keeping up with that, and I wanted to avoid that blowout again. So with a half-second delay between each photograph and there being 968 photographs to take, that means that it took
00:05:58 approximately eight minutes to photograph each side of the coin. I mentioned in one of my previous videos how the US Mint has created at least 14 different designs for US small cents since they were introduced in 1856. But 14 different designs times 968 images of each design times rotating each one of those images through
00:06:25 360 degrees resulted in 6,621,120 distinct images of pennies that I could use later to train the AI. And that worked out really nicely, except that it took over 21 hours just to run a single epoch to train the neural network. I actually tried several times to improve my model's efficiency and its performance so that it would run faster,
00:06:55 but it turns out that my first try was actually the best. Now, one of the things that I noticed with my original training set was that some of the images contained JPEG artifacting in the corners that were supposed to be completely masked off. Those areas are not of interest to me, and they should not be of interest to the AI either.
00:07:18 But often the AI will see that artifacting in the areas that aren't of interest and trigger on it as part of its training; it will treat those extra pixels in those areas as a way to identify coins, which is something that we want to avoid. Now, to fix that, I actually had to go back in and make sure that all of the corners of the images were completely masked off with black pixels.
00:07:46 And I kind of reverted back to just using OpenCV to draw a big black circle around the coin, instead of trying to map a mask on top of it, to blank out those areas that weren't of interest to the AI. The problem with that was that, once it was complete, I had to run the training all over again because we had a completely new data set. So I let that run for one epoch, and we're getting some promising results.
00:08:13 The output of my model is showing me the correct name for the design, and it's also showing me the rotation angle that it is out of alignment by. The one problem that I am running into is that the model is 100% confident that it's correct, even when it's wrong. That's called overfitting. If anyone has any suggestions about how
00:08:36 to fix that or things that I might try, please leave a comment down below. So in the next video, I hope to have that put together, where another neural network is able to recognize specific features of a given design and let me know if it's a coin that I would like to add to my collection. So if you're into that sort of thing, I hope you'll hit the like button, that you'll subscribe to the channel, and that you'll leave a comment down below
00:08:54 if you have any suggestions for how I can improve my model. But until next time, that's my two cents, and I hope you have a great day. I look forward to seeing you in the next video. Thanks, everyone.