OLED HUD Development

To begin with, we run the basic example OLED code and accelerometer code on the same setup to make sure the wiring and devices are OK and there's no conflict. This might seem like a waste of time, but it lets us move confidently on to more complex code knowing that the basic system is good.

Development can now proceed down two paths before merging: one is the application architecture, the layout of the code for the Arduino; the other is the design and aesthetics. Because there's only me working on it, I split my project time evenly between the two.

I tried several tools to convert bitmaps into suitable C++ code, but got annoyed with having to convert my Photoshop images into bitmaps first (an annoying extra set of mouse clicks). I didn't want to work in a bitmap; I wanted to use layers while I was thinking about the HUD. So instead I wrote a tiny bit of Python to convert an image into C code that I can then just copy and paste into place.
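The core of such a converter is just bit-packing. The sketch below (names and layout are my illustration, not the project's actual script) packs a monochrome pixel grid into the horizontal, MSB-first byte layout that Adafruit GFX's `drawBitmap()` expects; loading the image itself, e.g. with Pillow, is left out so the packing logic stands alone:

```python
# Pack a 1-bit pixel grid into C source for a PROGMEM byte array.
# Assumes the horizontal, MSB-first layout used by Adafruit GFX
# drawBitmap(); each row is padded to a whole byte.

def pixels_to_c_array(pixels, name="hud_bitmap"):
    """pixels: list of rows, each a list of 0/1 values.
    Returns a C array declaration ready to paste into the sketch."""
    height = len(pixels)
    width = len(pixels[0])
    row_bytes = (width + 7) // 8          # bytes per padded row
    data = []
    for row in pixels:
        for b in range(row_bytes):
            byte = 0
            for bit in range(8):
                x = b * 8 + bit
                if x < width and row[x]:
                    byte |= 0x80 >> bit   # MSB is the leftmost pixel
            data.append(byte)
    body = ", ".join(f"0x{v:02X}" for v in data)
    return (f"const unsigned char {name}[] PROGMEM = "
            f"{{ {body} }}; // {width}x{height}")
```

For example, an 8x2 grid with the corners of the top row set and the bottom row solid produces `0x81, 0xFF`, which is easy to verify by eye against the source image.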

Furthermore, it was very important to try the different HUD designs on the actual device: what looked like a good idea on my computer (often when blown up) could look terrible on the device, as the OLED's brightness means some sharp images did not show up as expected. For example, these are 50 Nerf bullets, but on the display they don't look as great as they could; a better bullet indicator is needed!

I also realised quite quickly that I had to take advantage of animating the display. Whilst the HUD previews sometimes looked good, a simpler but animated HUD would be cooler. And I stress here that cooler does not mean better for targeting (who wants a moving reticule anyway?)

From the programming perspective, I began to break my program down into 'business logic' (i.e. working out what values need to be displayed) and 'display logic' (i.e. animation routines). This means my code is starting to be laid out much like the Pygame code: a strict main loop that stays away from any slow, blocking calls.
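The split might be sketched like this (function and field names here are hypothetical, not taken from the actual Arduino sketch, which would use `millis()` for timing):

```python
# A sketch of the business-logic / display-logic split, in the same
# spirit as a Pygame main loop.

def update_state(state, sensors):
    """Business logic: decide WHAT to show (no drawing here)."""
    state["bullets"] = sensors.get("bullets", 0)
    state["pitch"] = sensors.get("pitch", 0.0)
    return state

def render(state, t_ms):
    """Display logic: decide HOW to show it (drawing and animation).
    Returns draw commands rather than touching hardware, so it can be
    previewed and tested off-device."""
    frame = (t_ms // 100) % 4            # 4-frame reticule animation
    return [("reticule", frame), ("bullets", state["bullets"])]

# One iteration of the strict loop: read, update, draw, never block.
state = update_state({}, {"bullets": 50, "pitch": 1.5})
commands = render(state, t_ms=250)
```

Keeping `render()` a pure function of state and time is what makes the animation free: the reticule frame falls out of the clock rather than needing its own bookkeeping.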

The current prototype looks a little bit like this, with an animated reticule.

The addition of the MPU6050 allows the HUD to be a closed system with regards to elevation; direction may be possible too (depending on some of the points below), and I am considering adding a hall-effect sensor so direction is known. Initially, though, an 'artificial horizon' (and I use that term knowingly badly, because it is not a true artificial horizon) allows some elements to be displayed more easily.
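To show how elevation might be derived from the MPU6050, here is a sketch of the usual approach: an accelerometer-only pitch estimate, optionally smoothed with a complementary filter against the gyro rate. The axis convention (X along the barrel) and the blend factor are assumptions for illustration, not the project's actual code:

```python
import math

def accel_pitch(ax, ay, az):
    """Pitch angle in degrees from the accelerometer alone, assuming
    X points along the barrel. Drift-free but noisy."""
    return math.degrees(math.atan2(ax, math.sqrt(ay * ay + az * az)))

def complementary_pitch(prev_pitch, gyro_rate, ax, ay, az, dt, alpha=0.98):
    """Blend the integrated gyro rate (smooth but drifts) with the
    accelerometer pitch (absolute but noisy). alpha near 1 trusts
    the gyro over short timescales."""
    return (alpha * (prev_pitch + gyro_rate * dt)
            + (1 - alpha) * accel_pitch(ax, ay, az))
```

Even the accelerometer-only version is enough to drive a rough 'artificial horizon' line on the HUD; the filter just stops it jittering when the gun moves.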

Architecture for the entire sentry gun is proceeding, with some design decisions still to be made, particularly around the computer vision elements: is the Pi/vision a separate targeting (spotter-like) system or built in? Does it use one camera, or two (or one with a spinning mirror)?

All this, and more, to be continued. The project offers a lot of individually interesting challenges.