Project Alpha
The home of Project Equinox

IoI V2 (and its improvements)

Oliver said on 14:08:48 15-Oct-2014

Right. Before I go into the improvements in v2 of IoI, or Images of Images, let me give you a brief bit of history about what IoI is and how v1 worked.

Take a closer look at the picture on the left of this post by clicking on it and viewing it full-size - you may recognise it as Someone from Somebody versus Something - but all is not as it seems...

IoI takes hundreds of thousands of images off the internet, or in this case out of my personal data storage "stash", shrinks them to a size of around 12 x 10 pixels and then catalogues the average colour of each one. I then show the computer another image, hopefully an HD one, and it gets divided into 12 x 10 pixel blocks. Each block is colour-averaged and replaced by the 12 x 10 pixel image that matches that colour, either exactly or as closely as possible.
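To make that concrete, here's a rough sketch of the v1 idea in Python. This isn't the actual program (Pillow and NumPy are just for illustration), and the colour match here is a plain Euclidean distance in RGB space:

import numpy as np
from PIL import Image

TILE_W, TILE_H = 12, 10

def average_colour(img):
    # Mean RGB colour of an image, as a float vector of length 3.
    return np.asarray(img.convert("RGB"), dtype=float).reshape(-1, 3).mean(axis=0)

def build_library(paths):
    # Catalogue: for each library image, keep its 12x10 thumbnail
    # and that thumbnail's average colour.
    library = []
    for path in paths:
        thumb = Image.open(path).convert("RGB").resize((TILE_W, TILE_H))
        library.append((average_colour(thumb), thumb))
    return library

def closest_tile(colour, library):
    # v1 matching: a linear scan of the whole library for the entry
    # whose average colour is nearest (exact match or closest).
    best_thumb, best_delta = None, float("inf")
    for lib_colour, thumb in library:
        delta = np.linalg.norm(colour - lib_colour)
        if delta < best_delta:
            best_thumb, best_delta = thumb, delta
    return best_thumb

def render(source, library):
    # Carve the source into 12x10 blocks and replace each block with
    # the library image whose average colour is the best match.
    src = source.convert("RGB")
    out = Image.new("RGB", src.size)
    for y in range(0, src.height - TILE_H + 1, TILE_H):
        for x in range(0, src.width - TILE_W + 1, TILE_W):
            block = src.crop((x, y, x + TILE_W, y + TILE_H))
            out.paste(closest_tile(average_colour(block), library), (x, y))
    return out

That linear scan in closest_tile is the part the rest of this post is about.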

There is an issue, though. Whilst my computer is fast at laying down tiles (at least 28 tiles per hundredth of a second), the matching is the bottleneck: it cross-references over 100,000 library images per second, but with over a million entries in the image library, each rendered tile could take 10 seconds or more to calculate. The full-scale image attached to this post has 16,080 individual tiles, which means doing the whole image the v1 way would take 160,800 seconds! That's 2,680 minutes, or almost 45 hours ... or 1.86 days.

Something needed to be done about this, particularly from a bug-hunting point of view. I know I could have used smaller images, but that felt like the lazy way out. No; the idea I incorporated was an interesting one for me, as I rarely need to use them. It was a look-up table... or, as I refer to them, LUTs.

So: every image in the database has a colour value, and every entry in the database has a unique ID. Instead of working out the best-matching image afresh every time, I decided that for each colour value scanned, I would store the chosen image's unique ID against it, along with a delta value (how far the scanned colour is from the issued image's actual colour; a number between 0 and about 10.3 in the current dataset).
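In sketch form, the LUT is little more than a dictionary keyed on the scanned colour, building on the sketch above. (The delta below is plain Euclidean RGB distance, so its scale won't match the 0-to-10.3 range of the real dataset.)

lut = {}  # scanned colour -> (library image ID, delta)

def closest_tile_v2(colour, library):
    # Round the scanned colour to an integer RGB triple so it can be a key.
    key = tuple(int(round(c)) for c in colour)
    if key in lut:
        return lut[key]  # seen this colour before: instant retrieval, no scan
    best_id, best_delta = None, float("inf")
    for image_id, (lib_colour, _thumb) in enumerate(library):
        delta = np.linalg.norm(colour - lib_colour)
        if delta < best_delta:
            best_id, best_delta = image_id, delta
    lut[key] = (best_id, best_delta)  # pay the slow scan once per colour
    return lut[key]

The first time a colour turns up it still costs the full scan; every repeat of that colour afterwards is just a dictionary hit.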

Image processing was immediately stopped and restarted with the new v2 code. Initially, the program was as slow as v1... naturally, with no data in the LUT, the computer had to build it. But as soon as there was enough data in the LUT, images, particularly those whose colours had already been inserted, started to render a lot faster. A LOT faster. Individual tiles dropped from their 10 or so seconds to below 0.01 seconds.

This is the first of many improvements.

Just to compare the times between v1 and v2 for this image (note: 16.66 here is seconds per tile, not tiles per second):
v1: 16,080 tiles at 16.66 s/tile presently = 3.10 days
v2: 16,080 tiles (9,581 to make at 16.66 s/tile, and 6,498 already known and retrieved from the LUT at 2,800 tiles/sec) = 1.85 days
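For anyone checking my sums, the arithmetic works out like this (illustrative Python again):

# Timing comparison between v1 and v2 for the 16,080-tile image.
SECS_PER_DAY = 86400

# v1: every tile needs a full library scan at 16.66 seconds per tile.
v1_secs = 16080 * 16.66

# v2: the 9,581 LUT misses still need the full scan;
# the 6,498 hits are retrieved at 2,800 tiles per second.
v2_secs = 9581 * 16.66 + 6498 / 2800.0

print("v1: %.2f days" % (v1_secs / SECS_PER_DAY))  # v1: 3.10 days
print("v2: %.2f days" % (v2_secs / SECS_PER_DAY))  # v2: 1.85 days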

Oh and a sneak peek? I did that image in v3 and it took 1.27 HOURS!
