<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[The Rantings of a Mad Computer Scientist]]></title><description><![CDATA[The research blog of Joss Whittle]]></description><link>https://www.josswhittle.com/</link><image><url>http://www.josswhittle.com/favicon.png</url><title>The Rantings of a Mad Computer Scientist</title><link>https://www.josswhittle.com/</link></image><generator>Ghost 1.22</generator><lastBuildDate>Sat, 25 Apr 2026 13:44:29 GMT</lastBuildDate><atom:link href="https://www.josswhittle.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Procedural Spirographs in Python]]></title><description><![CDATA[<div class="kg-card-markdown"><p><img src="https://www.josswhittle.com/content/images/2022/01/path.png" alt="path"></p>
</div>]]></description><link>https://www.josswhittle.com/procedural-spirographs-in-python/</link><guid isPermaLink="false">61f6d725cc3a5822a44be101</guid><category><![CDATA[Computer Graphics]]></category><dc:creator><![CDATA[Joss Whittle]]></dc:creator><pubDate>Sun, 30 Jan 2022 18:22:14 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p><img src="https://www.josswhittle.com/content/images/2022/01/path.png" alt="path"></p>
</div>]]></content:encoded></item><item><title><![CDATA[Simulating Magnetic Fields of Arbitrary Meshes using Radia and TetGen]]></title><description><![CDATA[<div class="kg-card-markdown"><p><img src="https://www.josswhittle.com/content/images/2021/01/magnetic-bunny-8_cropped_cr_cr.png" alt="magnetic-bunny-8_cropped_cr_cr"></p>
<p>Image of a tetrahedralized Stanford Bunny simulated as if it were made of a vertically magnetized metal. The magnetic field is visualized by starting a number of stream curves at random locations around each of the vertices within the tetrahedral mesh of the bunny, and integrating them forwards and backwards</p></div>]]></description><link>https://www.josswhittle.com/simulating-magnetic-fields-of-arbitrary-meshes-using-radia-and-tetgen/</link><guid isPermaLink="false">6015e0d6004295071b794ea5</guid><category><![CDATA[Opt-ID]]></category><category><![CDATA[Synchrotron]]></category><category><![CDATA[Magnets]]></category><category><![CDATA[Computer Graphics]]></category><dc:creator><![CDATA[Joss Whittle]]></dc:creator><pubDate>Sat, 30 Jan 2021 22:51:29 GMT</pubDate><media:content url="https://www.josswhittle.com/content/images/2021/01/magnetic-bunny-7_cropped_cr-1.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://www.josswhittle.com/content/images/2021/01/magnetic-bunny-7_cropped_cr-1.png" alt="Simulating Magnetic Fields of Arbitrary Meshes using Radia and TetGen"><p><img src="https://www.josswhittle.com/content/images/2021/01/magnetic-bunny-8_cropped_cr_cr.png" alt="Simulating Magnetic Fields of Arbitrary Meshes using Radia and TetGen"></p>
<p>Image of a tetrahedralized Stanford Bunny simulated as if it were made of a vertically magnetized metal. The magnetic field is visualized by starting a number of stream curves at random locations around each of the vertices within the tetrahedral mesh of the bunny, and integrating them forwards and backwards through the magnetic field in small steps.</p>
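<p>That tracing step can be sketched in a few lines of Python. This is an illustrative sketch only: the <code>toy_field</code> function below is a stand-in, whereas the real field values come from the Radia simulation.</p>
<pre><code class="language-python">import numpy as np

# A sketch of the stream-curve tracing described above. 'toy_field' is a
# stand-in; in the real pipeline Radia evaluates the magnetic field B(p).
def toy_field(p):
    x, y, z = p
    return np.array([-y, x, 0.1])

def trace(seed, field, step=0.01, n_steps=200, direction=1.0):
    # Advance the curve by small Euler steps along the local field
    # direction; direction=-1.0 integrates backwards through the field.
    curve = [np.asarray(seed, dtype=float)]
    for _ in range(n_steps):
        b = field(curve[-1])
        curve.append(curve[-1] + direction * step * b / np.linalg.norm(b))
    return np.stack(curve)

forward  = trace([1.0, 0.0, 0.0], toy_field)
backward = trace([1.0, 0.0, 0.0], toy_field, direction=-1.0)
</code></pre>
<p>Concatenating the backward curve (reversed) with the forward curve gives one continuous stream curve through the seed point.</p>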
<p><img src="https://www.josswhittle.com/content/images/2021/02/bunny-tetrahedral-mesh-1.png" alt="Simulating Magnetic Fields of Arbitrary Meshes using Radia and TetGen"></p>
<p>The magnetic field was simulated using the <a href="https://github.com/ochubar/Radia">Radia</a> magnetostatics framework by Oleg Chubar.</p>
<p>Tetrahedral meshing was performed from a triangular mesh of the classic Stanford Bunny using the <a href="https://github.com/pyvista/tetgen">PyVista TetGen</a> wrapper around Hang Si's <a href="https://github.com/ufz/tetgen">TetGen</a> library.</p>
<p>Geometry representation and interfacing between these frameworks was performed using the <a href="https://github.com/DiamondLightSource/Opt-ID">Opt-ID</a> framework as a means of testing its versatility for handling exotic magnet geometries.</p>
<p><img src="https://www.josswhittle.com/content/images/2021/01/magnetic-bunny-7_cropped_cr.png" alt="Simulating Magnetic Fields of Arbitrary Meshes using Radia and TetGen"></p>
</div>]]></content:encoded></item><item><title><![CDATA[Opt-ID: A System for Simulating and Optimizing Synchrotron Insertion Devices through Swarm Optimization]]></title><description><![CDATA[<div class="kg-card-markdown"><table>
<thead>
<tr>
<th style="text-align:center"><img src="https://www.josswhittle.com/content/images/2020/09/Bfield.png" alt="Bfield"></th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align:center">Figure 1: A simulation of the magnetic field in a Pure Permanent Magnet (PPM) Insertion Device. White solid lines denote the boundaries of magnet elements with alternating major field directions. Vectors denote the direction and magnitude of the magnetic field at each location. As an electron travels through the device</td></tr></tbody></table></div>]]></description><link>https://www.josswhittle.com/opt-id-a-system-for-simulating-and-optimizing-synchrotron-insertion-devices-through-swarm-optimization/</link><guid isPermaLink="false">5f6bbc6d004295071b794e96</guid><category><![CDATA[Opt-ID]]></category><category><![CDATA[Synchrotron]]></category><category><![CDATA[Magnets]]></category><dc:creator><![CDATA[Joss Whittle]]></dc:creator><pubDate>Wed, 23 Sep 2020 21:27:17 GMT</pubDate><media:content url="https://www.josswhittle.com/content/images/2020/09/Bfield-1.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><table>
<thead>
<tr>
<th style="text-align:center"><img src="https://www.josswhittle.com/content/images/2020/09/Bfield.png" alt="Opt-ID: A System for Simulating and Optimizing Synchrotron Insertion Devices through Swarm Optimization"></th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align:center">Figure 1: A simulation of the magnetic field in a Pure Permanent Magnet (PPM) Insertion Device. White solid lines denote the boundaries of magnet elements with alternating major field directions. Vectors denote the direction and magnitude of the magnetic field at each location. As an electron travels through the device along the centre-line (horizontal white line) it is oscillated by the alternating upward and downward field regions, encouraging it to emit photons travelling forwards along the beam path.</td>
</tr>
</tbody>
</table>
<hr>
<img src="https://www.josswhittle.com/content/images/2020/09/Bfield-1.png" alt="Opt-ID: A System for Simulating and Optimizing Synchrotron Insertion Devices through Swarm Optimization"><p>Synchrotron Insertion Devices (IDs) are systems of magnets used within straight sections of a synchrotron ring to convert energy stored in an electron beam into a photon beam which can be used for downstream science applications such as x-ray crystallography, spectroscopy, and tomography, to name just a few.</p>
<p>IDs contain many high-strength magnets held in close proximity, presenting significant technical challenges for design engineers. The design, construction, and tuning process is a time consuming and delicate task which can take months from first assembly through to ID commission and installation into a synchrotron.</p>
<p>Small imperfections in manufacturing, or damage to individual magnets, mean that an arbitrary ordering of magnets used during construction may allow small errors to accumulate unacceptably along the length of the device, resulting in poor ID performance or in the device exceeding physical tolerances imposed by the design of the synchrotron.</p>
<p>Much of the work in tuning an ID is done manually by specialist ID physicists and engineers. At each synchrotron facility around the world these experts use a variety of strategies, but most often these can be summarized at a high level as a pattern of build, measure, modify, and repeat. A candidate ordering of some or all of the magnets is constructed, the magnetic field of the ID is measured along its length and visualized, and specialists then make informed decisions about where changes should be made to correct the errors that are observed within the ID.</p>
<p>Modern IDs contain many hundreds (sometimes thousands) of individual and high-strength magnet elements, all of which will have small divergences from their intended sizes and magnetizations. This makes the combinatorial search space of distinct magnet orderings extremely large, where most of the orderings would perform poorly and comparatively very few orderings would perform well enough to be used.</p>
<p>Due to the long turnaround for trying out and measuring different magnet configurations, it is desirable to simulate the ID computationally to help determine an approximate magnet ordering before construction begins, significantly reducing the time needed to build and tune an ID.</p>
<p>The Opt-ID software developed by RFI and Diamond Light Source (DLS) in collaboration with physicists at BESSY II (the Berlin synchrotron facility) allows for efficient simulation of the magnetic fields produced by different candidate arrangements of magnets in an ID (figure 1) and provides an optimization framework for swapping and adjusting magnets within the ID to see how these changes would affect the magnetic field of the real device.</p>
<h2 id="synchrotronsinsertiondevicesandmagnets">Synchrotrons, Insertion Devices, and Magnets</h2>
<p>In a synchrotron such as the one at DLS, a high-energy and tightly focused electron beam (3 GeV at DLS) is kept in a stable orbit around a storage ring at close to the speed of light. As the electron beam orbits, it passes through multiple magnetic bending sections that curve the beam around in a complete circle. Between these bending sections there are straight sections, several metres in length, where IDs can be placed.</p>
<p>An ID will often take the form of a set of rigid metal girders, several metres long, held just a few millimetres above and below the path of the electron beam (figure 2). These girders hold a series of small but extremely strong permanent magnets, with each magnet's field aimed in an alternating direction relative to its neighbours.</p>
<table>
<thead>
<tr>
<th style="text-align:center"><img src="https://www.josswhittle.com/content/images/2020/09/I03-Girders.jpg" alt="Opt-ID: A System for Simulating and Optimizing Synchrotron Insertion Devices through Swarm Optimization"></th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align:center">Figure 2: A photo of the upper and lower girder of a Cryogenic Permanent Magnet Undulator (CPMU), holding many permanent magnets along its length, during construction at Diamond Light Source (DLS).</td>
</tr>
</tbody>
</table>
<hr>
<p>As the electron beam passes along the centre-line path of the ID it is oscillated back and forth in a sinusoidal wave by the alternating magnetic field directions (figures 1 &amp; 3). These oscillations cause the electron beam to dump its energy by emitting photons which continue to travel forwards along the beam path in a straight line to experiment chambers positioned tangentially to the synchrotron ring where the light can be used for scientific applications. The electron beam is then steered away using bending magnets so it can complete its orbit around the synchrotron ring and pass through the IDs again.</p>
<p>By controlling the thickness and spacing of each magnet in the ID, the period of the electron beam's oscillations, and therefore the wavelength of the emitted photons, can be finely tuned. IDs installed in synchrotrons provide extremely bright, focused, and consistent sources of light across a wide range of wavelengths, and are one of the foremost methods for producing powerful hard and soft x-ray beams, which are difficult to produce at such high intensities and purities by other means.</p>
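<p>As a rough illustration of this tuning relationship (the standard on-axis undulator equation from accelerator physics, rather than anything specific to Opt-ID), the emitted wavelength depends on the magnet period, the beam energy, and a dimensionless field-strength parameter K:</p>
<pre><code class="language-python">def undulator_wavelength(period_m, gamma, K):
    # Standard on-axis undulator equation: the emitted wavelength scales
    # with the magnet period and inversely with the square of the
    # electron energy (gamma), tuned by the field-strength parameter K.
    return (period_m / (2.0 * gamma**2)) * (1.0 + K**2 / 2.0)

# 3 GeV electrons (as at DLS); the electron rest energy is 511 keV.
gamma = 3e9 / 511e3
lam = undulator_wavelength(0.02, gamma, K=1.0)   # 20 mm magnet period
print(lam)   # roughly 4e-10 m, i.e. hard x-rays
</code></pre>
<p>Halving the magnet period roughly halves the wavelength, which is why magnet thickness and spacing give such direct control over the output.</p>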
<table>
<thead>
<tr>
<th style="text-align:center"><img src="https://www.josswhittle.com/content/images/2020/09/PPM-Antisymmetric.png" alt="Opt-ID: A System for Simulating and Optimizing Synchrotron Insertion Devices through Swarm Optimization"></th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align:center">Figure 3: A diagram showing the arrangement of magnets at the start (left) and end (right) of a Pure Permanent Magnet (PPM) Insertion Device. Arrows denote the major magnetic field direction of the magnet in each slot of the device. The green horizontal line denotes the path of the synchrotron electron beam which runs along the centre line of the ID.</td>
</tr>
</tbody>
</table>
<hr>
<p>The magnets used to build an ID (figure 4) are generally manufactured by metal sintering, a form of 3D printing, and are built to extremely tight size and magnetization tolerances. However, deviations in manufacturing, or minor scratches and damage, will always exist, as the magnets are brittle and under significant internal stress from their magnetization.</p>
<table>
<thead>
<tr>
<th style="text-align:center"><img src="https://www.josswhittle.com/content/images/2020/09/I24-Magnets.jpg" alt="Opt-ID: A System for Simulating and Optimizing Synchrotron Insertion Devices through Swarm Optimization"></th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align:center">Figure 4: A set of sintered permanent magnets (gold coloured) installed in a section of the lower girder of a Cryogenic Permanent Magnet Undulator (CPMU) constructed at Diamond Light Source (DLS).</td>
</tr>
</tbody>
</table>
<hr>
<p>The effects of these minor deviations accumulate with one another over the several metres of an ID as an electron passes along the device. They can degrade the quality of the photon beam, leaving it insufficiently focused or spread across a range of wavelengths rather than concentrated at the single pure wavelength that was desired, or degrade the quality of the electron beam itself by allowing it to drift away from the centre line of the ID, preventing it from completing its orbit around the synchrotron.</p>
<h2 id="optidaframeworkforsimulatingandoptimizingformagnetarrangements">Opt-ID: A Framework for Simulating and Optimizing for Magnet Arrangements</h2>
<p>To compensate for the deviations in each magnet, the Opt-ID (Optimization of Insertion Devices) software has been developed by RFI and DLS to search for an arrangement of the available magnets in which their individual errors and deviations cancel one another, yielding the brightest and most focused photon beam possible at the desired wavelength while ensuring that the synchrotron electron beam quality is also maintained.</p>
<p>Opt-ID provides an extensible framework for modelling and simulating (figures 1 &amp; 3) the magnet configurations of different types of ID, for mutating candidate magnet arrangements through operations such as magnet swaps, flips, and insertions, and for orchestrating the parallel optimization of multiple candidate orderings of magnets using swarm optimization. Using precise measurements of each magnet’s properties taken by the manufacturers and the ID specialists at DLS we can simulate the magnetic field contribution of each magnet if it were placed in each location of the ID.</p>
<p>An ID will consist of repeating patterns of different types of magnets, such as horizontally or vertically aligned magnets, or magnet types with different physical dimensions (orange and purple “end” magnets in figure 3). These patterns of different magnet types will be constant and fixed, but any candidate magnet of a matching type can be potentially placed in any slot for magnets of that type, often in either a flipped or un-flipped orientation.</p>
<p>This means that for a hypothetical ID with two types of magnets (horizontal and vertical), 100 candidate magnets of each type, and 50 magnet slots for each type, where slots can accept each magnet in either a flipped or unflipped state, there are more than <code>(100 choose 50)^2 ≈ 10^58</code> possible magnet arrangements that could be constructed.</p>
<p>It is intractable to simulate every possible magnet arrangement to find an optimal one, even for relatively small devices like the one described above. In practice, real IDs are often much larger than the one described above and contain more than two types of magnet.</p>
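<p>The size of that search space is easy to confirm directly. This is a back-of-the-envelope sketch using Python's standard library, not Opt-ID code:</p>
<pre><code class="language-python">import math

# Hypothetical device from the text: 2 magnet types, 100 candidate
# magnets per type, 50 slots per type. Count the ways of choosing
# which 50 candidates of each type fill the slots.
ways_per_type = math.comb(100, 50)
arrangements = ways_per_type ** 2
print(f'{arrangements:.3e}')   # about 1.018e+58
</code></pre>
<p>Even at a billion simulated arrangements per second, exhausting this space would take vastly longer than the age of the universe, which is why a guided search is needed.</p>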
<h2 id="swarmoptimizationandartificialimmunesystems">Swarm Optimization and Artificial Immune Systems</h2>
<p>To tackle the large combinatorial search space Opt-ID leverages a class of strategies known as swarm optimization. At a high level, swarm strategies are those where multiple unique candidate solutions (different magnet orderings) are optimized together in parallel. The larger the swarm, the more candidates are considered at once, leading to better coverage over the search space of potential solutions. Often information from individual candidates is shared across the population in order to direct the swarm as a whole towards areas of the search space that appear to contain better solutions.</p>
<p>Opt-ID uses a type of swarm optimization called an Artificial Immune System (AIS), which over multiple generations evolves a population of candidate solutions to explore the search space of possible magnet orderings (figure 5):</p>
<ul>
<li>(a) The strategy starts with a population of magnet orderings that are randomly sampled for the first generation.</li>
<li>(b) For each candidate in the swarm multiple child orderings are cloned.</li>
<li>(c) The child orderings are mutated multiple times to explore similar orderings that may or may not be an improvement over the parent.</li>
<li>(d) The performance of each magnet ordering is then evaluated and ranked across the population.</li>
<li>(e) The best candidate orderings are kept to form the population for the next generation.</li>
<li>(f) The algorithm returns to step (b) and repeats the process of cloning and mutating child orderings using the current best population.</li>
</ul>
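<p>The steps above can be sketched as a short Python loop. The objective and mutation below are illustrative toys standing in for Opt-ID's real magnetic field evaluation and magnet operations:</p>
<pre><code class="language-python">import random

def fitness(ordering):
    # Toy objective standing in for the field simulation: lower is better.
    return sum(abs(m - i) for i, m in enumerate(ordering))

def mutate(ordering):
    # Swap two random slots, mimicking a manual magnet-swap operation.
    child = list(ordering)
    i, j = random.sample(range(len(child)), 2)
    child[i], child[j] = child[j], child[i]
    return child

def ais(population, generations=50, n_clones=4):
    survivors = len(population)
    for _ in range(generations):
        # (b, c) clone and mutate children; (d) rank; (e) keep the best.
        children = [mutate(p) for p in population for _ in range(n_clones)]
        population = sorted(population + children, key=fitness)[:survivors]
    return population[0]

random.seed(0)
swarm = [random.sample(range(20), 20) for _ in range(10)]
best = ais(swarm)
</code></pre>
<p>Because parents compete alongside their children in the ranking, the best fitness in the population can never get worse from one generation to the next.</p>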
<table>
<thead>
<tr>
<th style="text-align:center"><img src="https://www.josswhittle.com/content/images/2020/09/AIS.png" alt="Opt-ID: A System for Simulating and Optimizing Synchrotron Insertion Devices through Swarm Optimization"></th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align:center">Figure 5: A diagram showing the stages of the Artificial Immune System (AIS) algorithm.</td>
</tr>
</tbody>
</table>
<hr>
<p>AIS provides a flexible and extensible framework for structuring combinatorial optimization strategies. The types of mutation that are applied can be crafted to be specific to the problem being solved. Mutations in Opt-ID generally mimic the types of operation the specialist ID physicists and engineers would perform when they manually tune an ID such as swapping two magnets, or flipping a magnet in its current location.</p>
<p>As can be seen in steps (b) and (c) of figure 5, the number of children generated from a parent ordering and the number of mutations performed on the children can vary, and in practice is controlled by heuristics that balance exploration of the search space against the stability of the optimization. The best performing candidates are given more clones, directing additional computation into searching around those orderings. Because those orderings are known to be good, only a small number of mutations are applied to each clone, to avoid damaging the properties that made them perform well. Conversely, the worse performing orderings in the population are given fewer clones, which are mutated more times in the hope that they will improve and move up the ranking.</p>
<p>With each successive generation of the AIS algorithm, candidates can survive into the next generation if they remain good enough compared to the rest of the population. If a parent's children do not outperform it after they are mutated, the parent becomes more likely to survive. When this happens repeatedly it indicates that the parent has found either the global minimum or a local minimum in the search space, where all of the surrounding candidates perform worse. If the parent is not currently the best performing candidate in the population then we know it must be a local minimum, because at least some areas of the search space perform better. In such cases it would be preferable to remove the “stale” parent candidate so we can spend more time exploring elsewhere in the search space, where we may be able to continue to improve. This is performed by tracking an age for each candidate that starts at zero when it is cloned from its parent and increases by one for every generation it survives. If a candidate's age passes a threshold and the candidate is not the best in the population, it is removed from the population.</p>
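<p>One way to sketch this age-based culling (illustrative names and thresholds, not Opt-ID's API) is to give each candidate a lifetime that counts down every generation, resetting only for the current best candidate:</p>
<pre><code class="language-python">def cull_stale(ranked, lifetimes, max_age=8):
    # ranked is best-first; lifetimes count generations left before a
    # candidate is considered stale. The best candidate is never culled,
    # so its lifetime is reset; a lifetime reaching zero means removal.
    lifetimes = [max_age] + [life - 1 for life in lifetimes[1:]]
    kept = [(c, life) for c, life in zip(ranked, lifetimes) if life]
    return [c for c, life in kept], [life for c, life in kept]

ranked = ['a', 'b', 'c']
print(cull_stale(ranked, [3, 1, 5]))   # 'b' expires and is removed
</code></pre>
<p>Freshly cloned candidates would enter the population with a full lifetime, so only orderings that repeatedly fail to improve are pruned.</p>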
<h2 id="projectaims">Project Aims</h2>
<p>A goal for the Opt-ID project is to continue to make the software flexible and extensible so that it can be used effectively at other synchrotron and FEL facilities around the world, and to allow for the optimization of new designs of state-of-the-art IDs.</p>
<p>We are looking at GPU acceleration as a method to increase the efficiency of Opt-ID so that larger and more complicated IDs can be optimized in less time.</p>
<p>The “reality gap” is a common issue in simulated optimization domains: the difference between in-simulation results and observed real world results. We plan to adapt Opt-ID with automatic registration techniques so that real world magnetic field measurements can be easily incorporated into the simulation process, narrowing the reality gap.</p>
<h2 id="code">Code</h2>
<p>Opt-ID is released open source under the Apache-2.0 License; the code can be found on GitHub: <a href="https://github.com/DiamondLightSource/Opt-ID">https://github.com/DiamondLightSource/Opt-ID</a></p>
<h2 id="collaborators">Collaborators</h2>
<p>Development of Opt-ID began as a project at DLS and is now continued as a collaboration between the AI team at RFI (Dr Joss Whittle, Dr Mark Basham) and the ID build team at DLS (Dr Zena Patel, Dr Geetanjali Sharma, Dr Stephen Milward) with additional collaboration with BESSY II (Ed Rial).</p>
</div>]]></content:encoded></item><item><title><![CDATA[Interactive Spirograph in Jupyter]]></title><description><![CDATA[<div class="kg-card-markdown"><p><img src="https://www.josswhittle.com/content/images/2019/01/output.png" alt="output"></p>
<pre><code class="language-python">import numpy as np
%matplotlib notebook
import matplotlib.pyplot as plt
import traceback

def circle(x, y, r, rot, steps=100):
    a = rot + np.linspace(0., 2. * np.pi, steps)
    return np.append((np.cos(a) * r) + x, x), np.append((np.sin(a) * r) + y, y)

class Element:
    
    def</code></pre></div>]]></description><link>https://www.josswhittle.com/interactive-spirograph-in-juptyer/</link><guid isPermaLink="false">5c317122ae776a15361dcd82</guid><dc:creator><![CDATA[Joss Whittle]]></dc:creator><pubDate>Sun, 06 Jan 2019 03:20:01 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p><img src="https://www.josswhittle.com/content/images/2019/01/output.png" alt="output"></p>
<pre><code class="language-python">import numpy as np
%matplotlib notebook
import matplotlib.pyplot as plt
import traceback

def circle(x, y, r, rot, steps=100):
    a = rot + np.linspace(0., 2. * np.pi, steps)
    return np.append((np.cos(a) * r) + x, x), np.append((np.sin(a) * r) + y, y)

class Element:
    
    def __init__(self, rate, edge_radius, path_radius, path_offset, tracers):
        self.centre_x     = 0
        self.centre_y     = 0
        self.path_x       = 0
        self.path_y       = 0
        self.edge_radius  = edge_radius
        self.edge_handle, = plt.plot([],[], '-')
            
        self.path_radius  = path_radius
        self.path_offset  = path_offset
        self.path_handle, = plt.plot([],[], ':')
        
        self.tracers = tracers
        self.tracer_handles = []
        self.tracer_paths   = []
        for colour, fmt, _ in self.tracers:
            hnd, = plt.plot([],[], fmt, color=colour)
            self.tracer_handles += [ hnd ]
            self.tracer_paths   += [ [[], []] ]
        
        self.parent    = None
        self.children  = []
        self.rate      = rate
        self.position  = 0
        self.rotation  = 0
        self.acc_rotation = 0
    
    def step(self, show_elements):
        
        self.rotation += self.rate
        
        if (self.parent is not None): 
            self.acc_rotation = self.rotation + self.parent.acc_rotation
            
            if (self.parent.path_radius &lt;= self.parent.edge_radius):
                offset = self.parent.path_radius - self.edge_radius
                delta  = (self.rate * self.edge_radius) / self.parent.path_radius
            else:
                offset = self.parent.path_radius + self.edge_radius
                delta  = -((self.rate * self.edge_radius) / self.parent.path_radius)
                
            self.position -= delta*2
        
            self.centre_x = self.parent.path_x + np.cos(self.position) * offset
            self.centre_y = self.parent.path_y + np.sin(self.position) * offset
        
        if show_elements:
            self.edge_handle.set_data(*circle(self.centre_x, 
                                              self.centre_y, 
                                              self.edge_radius, 
                                              self.acc_rotation))
        else:
            self.edge_handle.set_data([],[])
        
        self.path_x = self.centre_x + np.cos(self.acc_rotation) * self.path_offset
        self.path_y = self.centre_y + np.sin(self.acc_rotation) * self.path_offset
        
        if show_elements:
            self.path_handle.set_data(*circle(self.path_x, 
                                              self.path_y, 
                                              self.path_radius, 
                                              self.acc_rotation))
        else:
            self.path_handle.set_data([],[])
        
        for hnd, path, (_, _, offset) in zip(self.tracer_handles,
                                             self.tracer_paths,
                                             self.tracers):
            
            path[0] += [ self.path_x + np.cos(self.acc_rotation) * offset ]
            path[1] += [ self.path_y + np.sin(self.acc_rotation) * offset ]
            hnd.set_data(*path)
            
        for child in self.children:
            child.step(show_elements)
            
    def add(self, element):
        self.children += [ element ]
        self.children[-1].parent = self
    
class UI:
    def __init__(self):
        
        self.show_elements = True
        figw, figh, figdpi = 950, 950, 50
        self.figure = plt.figure(facecolor='w', figsize=(figw/figdpi, figh/figdpi), dpi=figdpi)
        self.axis   = plt.gca()
        plt.axis('off')
        plt.xlim([-1,1])
        plt.ylim([-1,1])
        
        radius = 0.45
        num_pens = 10
        cmap = [plt.get_cmap('RdBu')(int(idx)) 
                for idx in np.linspace(64, 256-64, num_pens)]
        
        pens = [(colour, '-', radius-delta) 
                for colour, delta in zip(cmap, np.linspace(0.1,0.3,num_pens))]
                
        elem_0            = Element(0.05, radius, radius, 0.0, pens)
        self.root_element = Element(0.0, 1.0, 1.0, 0.0, [])
        self.root_element.add(elem_0)
        
        plt.show()
    
    def on_click(self, event):
        if not (event.inaxes == self.axis): return
        plt.sca(self.axis)
        
        try:
            self.show_elements = not self.show_elements
            self.root_element.step(self.show_elements)      
        except Exception:
            plt.title(traceback.format_exc())
        
    def on_move(self, event):
        if not (event.inaxes == self.axis): return
        plt.sca(self.axis)
        
        try:
            self.root_element.step(self.show_elements)      
        except Exception:
            plt.title(traceback.format_exc())
        
    def attach(self, key, func):
        self.figure.canvas.mpl_connect(key, func)

ui = UI()

def on_click(event):
    global ui
    ui.on_click(event)
    
def on_move(event):
    global ui
    ui.on_move(event)
        
ui.attach('button_press_event',  on_click)
ui.attach('motion_notify_event', on_move)
</code></pre>
</div>]]></content:encoded></item><item><title><![CDATA[CGVC 2018 Talk - A Deep Learning Approach to No-Reference Image Quality Assessment For Monte Carlo Rendered Images]]></title><description><![CDATA[<div class="kg-card-markdown"><p>A video of my talk taken by my co-author Prof. Mark Jones on our paper &quot;A Deep Learning Approach to No-Reference Image Quality Assessment For Monte Carlo Rendered Images&quot; published at CGVC 2018.</p>
<iframe width="100% !important" height="600px" src="https://www.youtube.com/embed/BYu3Yq2DwlM" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>
<p>In Full-Reference Image Quality Assessment (FR-IQA) images are compared with ground truth images that are</p></div>]]></description><link>https://www.josswhittle.com/cgvc-2018-talk/</link><guid isPermaLink="false">5ba27f9bae776a15361dcd71</guid><dc:creator><![CDATA[Joss Whittle]]></dc:creator><pubDate>Wed, 19 Sep 2018 17:21:17 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>A video of my talk taken by my co-author Prof. Mark Jones on our paper &quot;A Deep Learning Approach to No-Reference Image Quality Assessment For Monte Carlo Rendered Images&quot; published at CGVC 2018.</p>
<iframe width="100% !important" height="600px" src="https://www.youtube.com/embed/BYu3Yq2DwlM" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>
<p>In Full-Reference Image Quality Assessment (FR-IQA) images are compared with ground truth images that are known to be of high visual quality. These metrics are utilized in order to rank algorithms under test on their image quality performance. Throughout the progress of Monte Carlo rendering processes we often wish to determine whether images being rendered are of sufficient visual quality, without the availability of a ground truth image. In such cases FR-IQA metrics are not applicable and we instead must utilise No-Reference Image Quality Assessment (NR-IQA) measures to make predictions about the perceived quality of unconverged images. In this work we propose a deep learning approach to NR-IQA, trained specifically on noise from Monte Carlo rendering processes, which significantly outperforms existing NR-IQA methods and can produce quality predictions consistent with FR-IQA measures that have access to ground truth images.</p>
<h4 id="citeas">Cite as:</h4>
<pre><code class="language-bibtex">@inproceedings{Whittle2018, 
    author    = &quot;Whittle, Joss and Jones, Mark W.&quot;, 
    booktitle = &quot;Computer Graphics and Visual Computing (CGVC) 2018&quot;, 
    title     = &quot;A Deep Learning Approach to No-Reference Image Quality Assessment For Monte Carlo Rendered Images&quot;, 
    year      = &quot;2018&quot;, 
    month     = &quot;Sep&quot;,
    pages     = {23--31}, 
    doi       = {10.2312/cgvc.20181204} 
}
</code></pre>
</div>]]></content:encoded></item><item><title><![CDATA[A CNC Controlled Etch-A-Sketch]]></title><description><![CDATA[<div class="kg-card-markdown"><p>Initial work towards the creation of a <a href="https://en.wikipedia.org/wiki/Numerical_control">CNC</a> modification of a classic <a href="https://en.wikipedia.org/wiki/Etch_A_Sketch">Etch-A-Sketch</a> toy.</p>
<p><img src="https://www.josswhittle.com/content/images/2018/04/IMG_20180305_172431751.jpg" alt="etchasketch"></p>
<p>The stepper motors used for this project were super cheap, just 12 quid on amazon for 5 small <code>5v</code> stepper motors which each came with their own control board allowing me to control them over <a href="https://en.wikipedia.org/wiki/Pulse-width_modulation">PWM</a></p></div>]]></description><link>https://www.josswhittle.com/a-cnc-controlled-etch-a-sketch/</link><guid isPermaLink="false">5ac93486ae776a15361dcd57</guid><dc:creator><![CDATA[Joss Whittle]]></dc:creator><pubDate>Sat, 07 Apr 2018 21:22:12 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>Initial work towards the creation of a <a href="https://en.wikipedia.org/wiki/Numerical_control">CNC</a> modification of a classic <a href="https://en.wikipedia.org/wiki/Etch_A_Sketch">Etch-A-Sketch</a> toy.</p>
<p><img src="https://www.josswhittle.com/content/images/2018/04/IMG_20180305_172431751.jpg" alt="etchasketch"></p>
<p>The stepper motors used for this project were super cheap: just 12 quid on Amazon for 5 small <code>5V</code> stepper motors, each of which came with its own control board, allowing me to control them over <a href="https://en.wikipedia.org/wiki/Pulse-width_modulation">PWM</a> from a Raspberry Pi Model 3.</p>
<p>I created 3D printed parts for mounting the stepper motors to the Etch-A-Sketch and used a laser cutter to create gears out of <code>6mm</code> birch plywood.</p>
<p>Being cheap and low-powered, the device suffers from being quite slow, which is an issue for the Etch-A-Sketch. Slow movement tends to cause material build-up on the drawing cursor, making the drawn line grow increasingly wide as drawing progresses.</p>
<p>With some software tweaking I think some of these issues can be alleviated, which should allow the project to be demonstrated at department open days for prospective students considering studying at Swansea.</p>
</div>]]></content:encoded></item><item><title><![CDATA[Visualizing the Growth Pattern of a Poisson Disk Sampler]]></title><description><![CDATA[<div class="kg-card-markdown"><p>This project was inspired by the fantastic <a href="https://medium.com/@fogleman/pen-plotter-programming-the-basics-ec0407ab5929">pen plotter visualizations created by Michael Fogleman</a>.</p>
<p><img src="https://www.josswhittle.com/content/images/2018/04/poisson.png" alt="poisson"></p>
<p><a href="https://en.wikipedia.org/wiki/Supersampling#Poisson_disc">Poisson Disk Sampling</a> is a technique for drawing batches of <a href="https://en.wikipedia.org/wiki/Colors_of_noise#Blue_noise">blue noise</a> distributed samples from an n-dimensional domain. The method works by selecting an initial, seed, point and proposing <code>k</code> (the branching factor) random points within</p></div>]]></description><link>https://www.josswhittle.com/visualizing-the-growth-pattern-of-a-poisson-disk-sampler/</link><guid isPermaLink="false">5ac92c4eae776a15361dcd54</guid><category><![CDATA[Computer Graphics]]></category><dc:creator><![CDATA[Joss Whittle]]></dc:creator><pubDate>Sat, 07 Apr 2018 21:02:07 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>This project was inspired by the fantastic <a href="https://medium.com/@fogleman/pen-plotter-programming-the-basics-ec0407ab5929">pen plotter visualizations created by Michael Fogleman</a>.</p>
<p><img src="https://www.josswhittle.com/content/images/2018/04/poisson.png" alt="poisson"></p>
<p><a href="https://en.wikipedia.org/wiki/Supersampling#Poisson_disc">Poisson Disk Sampling</a> is a technique for drawing batches of <a href="https://en.wikipedia.org/wiki/Colors_of_noise#Blue_noise">blue noise</a> distributed samples from an n-dimensional domain. The method works by selecting an initial seed point and proposing <code>k</code> (the branching factor) random points between 1 and 2 radii <code>r</code> away from it. For each of these proposal points we test whether it is closer than the threshold radius <code>r</code> to any of the accepted points (initially just the seed point). If the point is far enough away from all accepted points it becomes accepted, and we sample <code>k</code> new points around it for later processing. If the point is too close to an already accepted point it is immediately discarded.</p>
<p>By continuing the process until a given number of points have been accepted, until the n-dimensional space can no longer be filled without points being closer than the threshold <code>r</code>, or until some other criterion is met, we end up with a set of well distributed points that have nice mathematical properties when used in stochastic approximation methods.</p>
<p>An interesting observation is that the set of sample points is &quot;grown&quot; outwards from the seed point, and that each accepted point can trace its origin to a single parent point which spawned it. If we connect the sampled points as a tree hierarchy we can visualize the growth pattern of the sample set as a tree.</p>
<p>The implementation I used to generate the above image used the sampling algorithm described in <a href="https://www.cct.lsu.edu/~fharhad/ganbatte/siggraph2007/CD2/content/sketches/0250.pdf">Fast Poisson Disk Sampling in Arbitrary Dimension, Robert Bridson 2007</a> which can produce batches of well distributed samples in <code>O(n)</code> computation time.</p>
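<p>A minimal sketch of the accept/reject process described above (my own illustration for this post, dropping Bridson's background-grid acceleration so the neighbor search is the naive <code>O(n^2)</code> version; the function name and parameters are hypothetical):</p>

```python
import math
import random

def poisson_disk_2d(r, k=30, n_max=200, seed=0):
    """Naive 2-D Poisson disk sampling: grow outward from a seed point,
    proposing k candidates in the annulus [r, 2r] around active points."""
    rng = random.Random(seed)
    accepted = [(0.0, 0.0)]          # the initial seed point
    active = [0]                     # indices of points that may still spawn children
    parent = {0: None}               # track lineage to visualize the growth tree
    while active and len(accepted) < n_max:
        i = rng.choice(active)
        px, py = accepted[i]
        for _ in range(k):
            # Propose a candidate between 1 and 2 radii from the parent
            theta = rng.uniform(0.0, 2.0 * math.pi)
            d = rng.uniform(r, 2.0 * r)
            cx, cy = px + d * math.cos(theta), py + d * math.sin(theta)
            # Accept only if no accepted point lies closer than r
            if all((cx - ax) ** 2 + (cy - ay) ** 2 >= r * r
                   for ax, ay in accepted):
                parent[len(accepted)] = i
                accepted.append((cx, cy))
                active.append(len(accepted) - 1)
                break
        else:
            active.remove(i)         # no proposal fit; retire this point
    return accepted, parent
```

<p>The <code>parent</code> dictionary is exactly the tree hierarchy visualized above: drawing a line from each point to its parent reproduces the growth pattern.</p>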
<p>I have released the code for this project open source <a href="https://gist.github.com/JossWhittle/e3481a1f27852b11a17939931f3c21b0">as a jupyter notebook</a>. The main bottleneck in this code is actually the line plotting of the sample tree due to limitations of <a href="https://matplotlib.org/">Matplotlib</a>. With a better method of drawing the generated trees larger and deeper growth patterns could easily be visualized.</p>
</div>]]></content:encoded></item><item><title><![CDATA[AC-GAN, Auxiliary Classifier Generative Adversarial Networks]]></title><description><![CDATA[<div class="kg-card-markdown"><p>In this project I implemented the paper <a href="https://arxiv.org/abs/1610.09585">Conditional Image Synthesis With Auxiliary Classifier GANs, Odena et. al. 2016</a> using the <a href="https://keras.io/">Keras</a> machine learning framework.</p>
<p><a href="https://arxiv.org/abs/1406.2661">Generative Adversarial Networks, Goodfellow et. al. 2014</a> represents a training regime for teaching neural networks how to synthesize data that could plausibly have come from a</p></div>]]></description><link>https://www.josswhittle.com/an-implementation-of-ac-gan/</link><guid isPermaLink="false">5ac8f2beae776a15361dcd42</guid><category><![CDATA[Machine Learning]]></category><category><![CDATA[Computer Graphics]]></category><dc:creator><![CDATA[Joss Whittle]]></dc:creator><pubDate>Wed, 12 Apr 2017 16:32:00 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>In this project I implemented the paper <a href="https://arxiv.org/abs/1610.09585">Conditional Image Synthesis With Auxiliary Classifier GANs, Odena et. al. 2016</a> using the <a href="https://keras.io/">Keras</a> machine learning framework.</p>
<p><a href="https://arxiv.org/abs/1406.2661">Generative Adversarial Networks, Goodfellow et. al. 2014</a> represents a training regime for teaching neural networks how to synthesize data that could plausibly have come from a distribution of real data - commonly images with a shared theme or aesthetic style such as images of celebrity faces (<a href="http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html">CelebA</a>), of handwritten digits (<a href="http://yann.lecun.com/exdb/mnist/">MNIST</a>), or of bedrooms (<a href="http://lsun.cs.princeton.edu/2017/">LSUN-Bedroom</a>).</p>
<p>In GANs two models are trained - a generative model that progressively learns to synthesize realistic and plausible images from a random noise input (the latent vector) - and a discriminative model that learns to tell these generated (fake) images from real images sampled from the target dataset. The two models are trained in lock-step such that the generative model learns to fool the discriminator model, and the discriminator adapts to become better at not being fooled by the generator.</p>
<p>This forms a <a href="https://en.wikipedia.org/wiki/Minimax">minimax</a> game between the two models which converges to a <a href="https://en.wikipedia.org/wiki/Nash_equilibrium">Nash equilibrium</a>. At this point the generator should be able to consistently produce convincing images that appear to be from the original dataset, but are in-fact parameterized by the latent vector fed to the generative model.</p>
<p>Auxiliary Classifier GANs extend the standard GAN architecture by jointly optimizing the generator's ability to fool the discriminative model and the discriminator's ability to correctly identify which digit it was shown. This allows the generative model to be parameterized not only by a random latent vector, but also by a representative encoding of which digit we would like it to synthesize.</p>
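<p>As a rough sketch of that joint objective (not the Keras model from my implementation), the discriminator's loss combines a real/fake source term with an auxiliary class term. In NumPy, for one batch of post-sigmoid/softmax outputs, it might be written as follows; the function name and arguments are illustrative:</p>

```python
import numpy as np

def ac_gan_discriminator_loss(p_real, y_class_probs, is_real, class_labels):
    """Illustrative AC-GAN discriminator loss: binary cross-entropy on the
    real/fake output plus categorical cross-entropy on the class output.
    Inputs are assumed to already be probabilities."""
    eps = 1e-8
    # Source loss L_S: did the discriminator spot real vs. generated images?
    l_source = -np.mean(is_real * np.log(p_real + eps)
                        + (1 - is_real) * np.log(1 - p_real + eps))
    # Class loss L_C: did it also recover which digit class was shown?
    l_class = -np.mean(np.log(y_class_probs[np.arange(len(class_labels)),
                                            class_labels] + eps))
    return l_source + l_class
```

<p>The discriminator is trained to minimize both terms, while the generator is trained to minimize <code>L_C</code> but maximize the source confusion.</p>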
<p><img src="https://www.josswhittle.com/content/images/2018/04/ca-gan3.png" alt="ac-gan"></p>
<p>The above image shows the result of my AC-GAN implementation trained on the MNIST dataset. On the left we see real images sampled randomly from MNIST for each of the 10 digit classes, and on the right we see images synthesized by the generative model for each class. The generated images are not sampled completely randomly; in this image I selected a random element of the latent vector and swept its value from 0 to 1. We can see that for each digit class this had the subtle effect of adjusting rotation and &quot;flair&quot; or perhaps &quot;serif-ness&quot;, showing that the generative model has mapped the space of possible values in the latent vector to different stylistic traits of the produced digits.</p>
<p>The results of this experiment are satisfying but not great overall. I believe the model suffers from at least partial &quot;mode collapse&quot;, where the generator learns to produce a subset of possible stylistic variations convincingly and so never attempts to learn how to produce the other stylistic variants.</p>
<p>Since the publication of Goodfellow's seminal work on GANs <a href="http://guimperarnau.com/blog/2017/03/Fantastic-GANs-and-where-to-find-them">many</a> <a href="http://guimperarnau.com/blog/2017/11/Fantastic-GANs-and-where-to-find-them-II">variations</a> have been proposed that attempt to solve common issues such as mode collapse and training stability.</p>
<p>In the future I plan to revisit this project and implement some of the newer and more advanced methods. While the code for this project is written as a jupyter notebook I do not plan to release the code as it is not very clean or well documented. I will however release well documented code when I revisit this project.</p>
</div>]]></content:encoded></item><item><title><![CDATA[Neural Artistic Style Transfer]]></title><description><![CDATA[<div class="kg-card-markdown"><p>In this project I implemented the paper <a href="https://arxiv.org/abs/1508.06576">A Neural Algorithm of Artistic Style, Gatys et. al. 2015</a> using the <a href="https://keras.io/">Keras</a> machine learning framework.</p>
<p><img src="https://www.josswhittle.com/content/images/2018/04/cat-amuse-combined.png" alt="cat-amuse-combined"><br>
Cat photo credit: Claire Whittle</p>
<p>My implementation was loosely based on the fantastic <a href="https://github.com/keras-team/keras/blob/master/examples/neural_style_transfer.py">Keras example code by Francois Chollet</a>. In my implementation I modifed the <a href="https://arxiv.org/pdf/1409.1556.pdf">VGG19</a> architecture</p></div>]]></description><link>https://www.josswhittle.com/neural-artistic-style-transfer/</link><guid isPermaLink="false">5ac7ea34ce73317532cec625</guid><category><![CDATA[Machine Learning]]></category><category><![CDATA[Computer Graphics]]></category><dc:creator><![CDATA[Joss Whittle]]></dc:creator><pubDate>Mon, 03 Apr 2017 22:51:00 GMT</pubDate><media:content url="https://www.josswhittle.com/content/images/2018/04/boat1-starrynight-combined-2.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://www.josswhittle.com/content/images/2018/04/boat1-starrynight-combined-2.png" alt="Neural Artistic Style Transfer"><p>In this project I implemented the paper <a href="https://arxiv.org/abs/1508.06576">A Neural Algorithm of Artistic Style, Gatys et. al. 2015</a> using the <a href="https://keras.io/">Keras</a> machine learning framework.</p>
<p><img src="https://www.josswhittle.com/content/images/2018/04/cat-amuse-combined.png" alt="Neural Artistic Style Transfer"><br>
Cat photo credit: Claire Whittle</p>
<p>My implementation was loosely based on the fantastic <a href="https://github.com/keras-team/keras/blob/master/examples/neural_style_transfer.py">Keras example code by Francois Chollet</a>. In my implementation I modified the <a href="https://arxiv.org/pdf/1409.1556.pdf">VGG19</a> architecture, using pre-trained weights trained on <a href="http://www.image-net.org/">ImageNet</a>. I replace the maximum pooling layers with average pooling using the same strides, and discard the fully connected layers at the end of the network as they are not needed and take up unnecessary memory on the GPU.</p>
<p>In Francois' code he makes use of the <a href="https://www.scipy.org/">SciPy</a> <a href="https://en.wikipedia.org/wiki/Limited-memory_BFGS">L-BFGS</a> optimizer. While this produced nice results in a small number of iterations I found that the high memory requirement of L-BFGS (even though the L stands for <em>Limited-memory</em>) was prohibitive in producing images of a resolution higher than around <code>400x400</code>. Through experimentation I found that the SciPy <a href="https://en.wikipedia.org/wiki/Conjugate_gradient_method">Conjugate Gradient</a> optimizer provided good results with greatly reduced memory complexity, allowing me to raise the resolution of produced images to around <code>720p</code> on a single NVidia 870m GPU.</p>
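<p>To illustrate the swap on a toy problem (the Rosenbrock function standing in for the actual style-transfer loss, which is evaluated through the VGG19 network), switching optimizers in SciPy is a one-argument change:</p>

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

# Starting point for the toy objective
x0 = np.array([-1.2, 1.0])

# Quasi-Newton L-BFGS: fast convergence, but keeps a history of past
# gradient vectors, which becomes prohibitive for high-resolution images
res_lbfgs = minimize(rosen, x0, jac=rosen_der, method='L-BFGS-B')

# Conjugate Gradient: only a handful of vectors of state, so memory stays
# modest as the parameter vector (the image being optimized) grows
res_cg = minimize(rosen, x0, jac=rosen_der, method='CG')
```

<p>Both runs reach the same minimum; the difference is the per-iteration memory footprint, which is what limited the achievable image resolution.</p>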
<p>I plan to revisit this project in the future implementing it entirely in <a href="https://www.tensorflow.org/">Tensorflow</a>. I may also investigate newer and more advanced methods that have been proposed since the publication of Gatys' seminal paper in this area.</p>
<p>Full code for this project is available <a href="https://gist.github.com/JossWhittle/4719ede7e961e3143230674ec74bfcd0">here as a Gist</a>.</p>
<p>In the remainder of this post I will show some of the images that I produced with the linked code.</p>
<p><img src="https://www.josswhittle.com/content/images/2018/04/boat1-starrynight-combined-1.png" alt="Neural Artistic Style Transfer"><br>
Boat photo credit: John Whittle</p>
<p><img src="https://www.josswhittle.com/content/images/2018/04/boat2-starrynight-combined.png" alt="Neural Artistic Style Transfer"><br>
Boat photo credit: John Whittle</p>
<p><img src="https://www.josswhittle.com/content/images/2018/04/cat-CompositionIV-combined.png" alt="Neural Artistic Style Transfer"><br>
Cat photo credit: Claire Whittle</p>
<p><img src="https://www.josswhittle.com/content/images/2018/04/cat-CompositionX-combined.png" alt="Neural Artistic Style Transfer"><br>
Cat photo credit: Claire Whittle</p>
<p><img src="https://www.josswhittle.com/content/images/2018/04/cat-picasso-combined.png" alt="Neural Artistic Style Transfer"><br>
Cat photo credit: Claire Whittle</p>
<p><img src="https://www.josswhittle.com/content/images/2018/04/river-CompositionIV-combined.png" alt="Neural Artistic Style Transfer"><br>
River photo credit: Taken from the original paper</p>
<p><img src="https://www.josswhittle.com/content/images/2018/04/river-CompositionX-combined.png" alt="Neural Artistic Style Transfer"><br>
River photo credit: Taken from the original paper</p>
<p><img src="https://www.josswhittle.com/content/images/2018/04/river-picasso-combined.png" alt="Neural Artistic Style Transfer"><br>
River photo credit: Taken from the original paper.</p>
<p><img src="https://www.josswhittle.com/content/images/2018/04/river-starrynight-combined.png" alt="Neural Artistic Style Transfer"><br>
River photo credit: Taken from the original paper</p>
</div>]]></content:encoded></item><item><title><![CDATA[Bunny Vase with Liquid]]></title><description><![CDATA[<div class="kg-card-markdown"><p>This image was generated using a renderer developed during my PhD using the <a href="https://en.wikipedia.org/wiki/Path_tracing#Bidirectional_path_tracing">bidirectional path tracing</a> algorithm.</p>
<p><img src="https://www.josswhittle.com/content/images/2018/04/bunny-vase-1.jpg" alt="bunny-vase"></p>
<p>Here I simulate the interactions of light coming from a diffuse area light source, scattering through a glass model of the <a href="https://en.wikipedia.org/wiki/Stanford_bunny">Stanford Bunny</a> which has been modified to have both inner and outer</p></div>]]></description><link>https://www.josswhittle.com/bunny-vase-with-liquid/</link><guid isPermaLink="false">5ac7fc7bce73317532cec629</guid><category><![CDATA[Computer Graphics]]></category><dc:creator><![CDATA[Joss Whittle]]></dc:creator><pubDate>Mon, 02 May 2016 23:02:00 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>This image was generated using a renderer developed during my PhD using the <a href="https://en.wikipedia.org/wiki/Path_tracing#Bidirectional_path_tracing">bidirectional path tracing</a> algorithm.</p>
<p><img src="https://www.josswhittle.com/content/images/2018/04/bunny-vase-1.jpg" alt="bunny-vase"></p>
<p>Here I simulate the interactions of light coming from a diffuse area light source, scattering through a glass model of the <a href="https://en.wikipedia.org/wiki/Stanford_bunny">Stanford Bunny</a> which has been modified to have both inner and outer walls, and filled with a simulated wine-like liquid. In the foreground, within the shadow of the bunny, we can see <a href="https://en.wikipedia.org/wiki/Caustic_(optics)">caustic illumination</a> patterns where light has been tinted and focused by refraction as it passes through the glass and liquid mediums. In the background of the image the grid pattern on the floor becomes blurred and out of focus due to the physical simulation of light interactions with a <a href="https://en.wikipedia.org/wiki/Aperture">camera aperture</a> and lens elements. Similarly, the aperture simulation can be seen in the form of small hexagonal specular highlights on the bunny's ears, which occur due to the aperture being modeled as a six-sided polygon.</p>
</div>]]></content:encoded></item><item><title><![CDATA[Geometric Instancing]]></title><description><![CDATA[<div class="kg-card-markdown"><p>This image was generated using a renderer developed during my PhD using the <a href="https://en.wikipedia.org/wiki/Path_tracing#Bidirectional_path_tracing">bidirectional path tracing</a> algorithm.</p>
<p><img src="https://www.josswhittle.com/content/images/2018/04/dragons-4.jpg" alt="dragons-4"></p>
<p>In ray tracing based rendering algorithms the main performance bottleneck comes from the time it takes to perform intersection tests between rays and the scene geometry. As the number of elements used to</p></div>]]></description><link>https://www.josswhittle.com/geometric-instancing/</link><guid isPermaLink="false">5ac7fe16ce73317532cec62c</guid><category><![CDATA[Computer Graphics]]></category><dc:creator><![CDATA[Joss Whittle]]></dc:creator><pubDate>Tue, 05 May 2015 23:09:00 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>This image was generated using a renderer developed during my PhD using the <a href="https://en.wikipedia.org/wiki/Path_tracing#Bidirectional_path_tracing">bidirectional path tracing</a> algorithm.</p>
<p><img src="https://www.josswhittle.com/content/images/2018/04/dragons-4.jpg" alt="dragons-4"></p>
<p>In ray tracing based rendering algorithms the main performance bottleneck comes from the time it takes to perform intersection tests between rays and the scene geometry. As the number of elements used to represent surfaces within the scene increases, so too does the computational complexity of performing these intersection tests.</p>
<p>Rather than testing a given ray against each surface element present in the scene, which would scale poorly with <code>O(n)</code> complexity, acceleration structures such as the <a href="https://en.wikipedia.org/wiki/K-d_tree">KD-Tree</a> and <a href="https://en.wikipedia.org/wiki/Bounding_volume_hierarchy">BVH-Tree</a> can be used to reduce the number of intersection tests to roughly <code>O(log n)</code> on average. This is accomplished by traversing a binary-tree like hierarchy, performing a comparatively cheap intersection test between a ray and a cutting plane or axis-aligned bounding-box at each node. When a leaf node of the tree is reached, only the mesh elements contained within that leaf need to be tested against the given ray. After an initial intersected element has been found, all nodes of the tree which can only contain elements further away than the current <em>closest intersection</em> can be ignored as the tree traversal continues, yielding significant performance gains.</p>
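<p>The comparatively cheap bounding-box test is typically the &quot;slab&quot; method; a minimal sketch (my own illustration, not the renderer's actual code), including the early-out against the current closest hit:</p>

```python
def ray_aabb_hit(o, d, box_min, box_max, t_best=float('inf')):
    """Slab test: does a ray (origin o, direction d) hit an axis-aligned
    bounding box within [0, t_best]? Passing the current closest-hit
    distance as t_best lets traversal skip nodes that can only contain
    geometry further away."""
    t_near, t_far = 0.0, t_best
    for axis in range(3):
        if d[axis] == 0.0:
            # Ray parallel to this slab: miss unless the origin lies inside it
            if o[axis] < box_min[axis] or o[axis] > box_max[axis]:
                return False
            continue
        inv = 1.0 / d[axis]
        t0 = (box_min[axis] - o[axis]) * inv
        t1 = (box_max[axis] - o[axis]) * inv
        if t0 > t1:
            t0, t1 = t1, t0
        # Intersect this slab's interval with the running interval
        t_near = max(t_near, t0)
        t_far = min(t_far, t1)
        if t_near > t_far:
            return False
    return True
```
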
<p>The <a href="https://en.wikipedia.org/wiki/Stanford_dragon">Stanford Dragon</a> model shown in this image contains 871,414 triangular mesh elements. In the above image, 99 copies of the dragon are present. By a naive scene construction method this would total 86,269,986 triangles. This has the effect of greatly increasing the memory requirements and size of the resulting BVH-Tree. In an ideal world we would only store the dragon mesh and a BVH-Tree over its elements once, and have a system for replaying this data efficiently in arbitrary positions, rotations, and scales.</p>
<p>To this end I have implemented geometric instancing as a feature within the rendering software developed for my PhD research. This is done through a coordinate-space transformation and cross-linking injected into the acceleration structure traversal. The dragon mesh is loaded into the renderer, and a BVH-Tree is constructed for the single dragon model. The bounding box of the BVH-Tree's root node is then duplicated through independent affine-transformations into the size, rotation, and position of each of the desired dragons. Another BVH-Tree is then constructed over these bounding boxes, along with the triangles making up the floor, walls, and light source of the scene. During traversal of the BVH-Tree, when one of the transformed bounding boxes is queried, the given ray is transformed by the inverse of the affine-transformation matrix for the current bounding box. This inverse transformed ray is then intersected with the BVH-Tree containing only the dragon mesh. The resulting intersection location can be found by scaling the original ray by the resulting distance to the closest intersection found, regardless of whether the intersection was within one of the nested bounding boxes.</p>
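<p>A small NumPy sketch of the inverse-transform trick, with a unit sphere standing in for the instanced mesh's BVH-Tree (illustrative only; the function names are hypothetical). The key detail is that the object-space direction is not renormalized, so the object-space hit distance <code>t</code> is valid along the original world-space ray:</p>

```python
import numpy as np

def hit_unit_sphere(o, d):
    """Object-space primitive: nearest positive t along o + t*d against a
    unit sphere at the origin, standing in for a full BVH traversal."""
    a = np.dot(d, d)
    b = 2.0 * np.dot(o, d)
    c = np.dot(o, o) - 1.0
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None
    t = (-b - np.sqrt(disc)) / (2.0 * a)
    return t if t > 0.0 else None

def hit_instance(o, d, M):
    """Intersect a world-space ray with one instance whose object-to-world
    affine transform is the 4x4 matrix M. The ray is pulled back into object
    space by M^-1; the direction is deliberately NOT renormalized, so the
    object-space t equals the world-space t along the original ray."""
    M_inv = np.linalg.inv(M)
    o_obj = (M_inv @ np.append(o, 1.0))[:3]   # points transform with w = 1
    d_obj = (M_inv @ np.append(d, 0.0))[:3]   # directions with w = 0
    t = hit_unit_sphere(o_obj, d_obj)
    if t is None:
        return None
    return o + t * d                          # world-space hit point
```
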
<p><img src="https://www.josswhittle.com/content/images/2018/04/bunny-final-bdpt.jpg" alt="bunny-final-bdpt"></p>
<p>With some additional work, each instanced mesh can have different materials applied to it through a stack based shading hierarchy which is also tracked during BVH-Tree traversal.</p>
</div>]]></content:encoded></item><item><title><![CDATA[Farming for Pixels – A Teaching Lab becomes a Computational Cluster]]></title><description><![CDATA[<div class="kg-card-markdown"><p>This image was generated using a renderer developed during my PhD using the <a href="https://en.wikipedia.org/wiki/Path_tracing#Bidirectional_path_tracing">bidirectional path tracing</a> algorithm, in submission to the 2015 SURF Research as Art Competition, hosted for researchers at Swansea University.</p>
<p><img src="https://www.josswhittle.com/content/images/2018/04/vases.jpg" alt="vases"></p>
<p>The image is rendered at 4k resolution and was rendered in approximately 8 hours to a high</p></div>]]></description><link>https://www.josswhittle.com/farming-for-pixels-a-teaching-lab-becomes-a-computational-cluster/</link><guid isPermaLink="false">5ac91dfeae776a15361dcd46</guid><category><![CDATA[Computer Graphics]]></category><dc:creator><![CDATA[Joss Whittle]]></dc:creator><pubDate>Sun, 26 Apr 2015 19:46:00 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>This image was generated using a renderer developed during my PhD using the <a href="https://en.wikipedia.org/wiki/Path_tracing#Bidirectional_path_tracing">bidirectional path tracing</a> algorithm, in submission to the 2015 SURF Research as Art Competition, hosted for researchers at Swansea University.</p>
<p><img src="https://www.josswhittle.com/content/images/2018/04/vases.jpg" alt="vases"></p>
<p>The image was rendered at 4k resolution, in approximately 8 hours, to a high degree of convergence by utilizing 50 of the machines in the Swansea Computer Science Department's Linux teaching lab as a computational cluster, using the <a href="https://en.wikipedia.org/wiki/Message_Passing_Interface">Message Passing Interface (MPI)</a> framework to implement multi-node parallelism in my rendering software.</p>
</div>]]></content:encoded></item><item><title><![CDATA[An ode to Pixar Renderman’s - Physically Plausible Pig]]></title><description><![CDATA[<div class="kg-card-markdown"><p>This image was generated using a renderer developed during my PhD using the <a href="https://en.wikipedia.org/wiki/Path_tracing#Bidirectional_path_tracing">bidirectional path tracing</a> algorithm.</p>
<p><img src="https://www.josswhittle.com/content/images/2018/04/ppp.jpg" alt="ppp"></p>
<p>The pig model is shaded with my interpretation of a glazed ceramic material. This combines a diffuse ceramic substrate with a glossy overlayer which are Fresnel blended during shading computation. The glaze overlay</p></div>]]></description><link>https://www.josswhittle.com/physically-plausible-pig/</link><guid isPermaLink="false">5ac9236aae776a15361dcd49</guid><category><![CDATA[Computer Graphics]]></category><dc:creator><![CDATA[Joss Whittle]]></dc:creator><pubDate>Sun, 08 Mar 2015 20:01:00 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>This image was generated using a renderer developed during my PhD using the <a href="https://en.wikipedia.org/wiki/Path_tracing#Bidirectional_path_tracing">bidirectional path tracing</a> algorithm.</p>
<p><img src="https://www.josswhittle.com/content/images/2018/04/ppp.jpg" alt="ppp"></p>
<p>The pig model is shaded with my interpretation of a glazed ceramic material. This combines a diffuse ceramic substrate with a glossy overlayer, the two being Fresnel blended during shading computation. The glaze overlay has a normal map applied to it to add detailed surface variation.</p>
</div>]]></content:encoded></item><item><title><![CDATA[Anisotropic Metal]]></title><description><![CDATA[<div class="kg-card-markdown"><p>This image was generated using a renderer developed during my PhD using the <a href="https://en.wikipedia.org/wiki/Path_tracing#Bidirectional_path_tracing">bidirectional path tracing</a> algorithm.</p>
<p><img src="https://www.josswhittle.com/content/images/2018/04/glossy.jpg" alt="glossy"></p>
<p>Just a quick render after implementing an Anisotropic Metal material in my renderer. The test scene was inspired by one I saw on <a href="http://www.kevinbeason.com/worklog/2009/03/20/glossy-reflections/">Kevin Beason's worklog</a> blog.</p>
</div>]]></description><link>https://www.josswhittle.com/anisotropic-metal/</link><guid isPermaLink="false">5ac925a0ae776a15361dcd4c</guid><category><![CDATA[Computer Graphics]]></category><dc:creator><![CDATA[Joss Whittle]]></dc:creator><pubDate>Thu, 12 Feb 2015 20:11:00 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>This image was generated using a renderer developed during my PhD using the <a href="https://en.wikipedia.org/wiki/Path_tracing#Bidirectional_path_tracing">bidirectional path tracing</a> algorithm.</p>
<p><img src="https://www.josswhittle.com/content/images/2018/04/glossy.jpg" alt="glossy"></p>
<p>Just a quick render after implementing an Anisotropic Metal material in my renderer. The test scene was inspired by one I saw on <a href="http://www.kevinbeason.com/worklog/2009/03/20/glossy-reflections/">Kevin Beason's worklog</a> blog.</p>
</div>]]></content:encoded></item><item><title><![CDATA[Concentric Disk Sampling]]></title><description><![CDATA[<div class="kg-card-markdown"><p>Yesterday I stumbled upon a lesser known and far superior method for mapping points from a square to a disk. The common approach which is presented to you after a quick google for Draw Random Samples on a Disk is the clean and simple mapping from Cartesian to Polar coordinates;</p></div>]]></description><link>https://www.josswhittle.com/concentric-disk-sampling/</link><guid isPermaLink="false">5ac92675ae776a15361dcd4f</guid><category><![CDATA[Computer Graphics]]></category><dc:creator><![CDATA[Joss Whittle]]></dc:creator><pubDate>Fri, 26 Dec 2014 20:14:00 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>Yesterday I stumbled upon a lesser known and far superior method for mapping points from a square to a disk. The common approach which is presented to you after a quick google for Draw Random Samples on a Disk is the clean and simple mapping from Cartesian to Polar coordinates; i.e.</p>
<p>Given a disk centered origin <code>(0,0)</code> with radius <code>r</code></p>
<pre><code>// Draw two uniform random numbers in the range [0,1)
R1 = RAND(0,1);
R2 = RAND(0,1);

// Map these values to polar space (phi,radius)
phi = R1 * 2 * PI;
radius = R2 * r;

// Map (phi,radius) in polar space to (x,y) in Cartesian space
x = cos(phi) * radius;
y = sin(phi) * radius;
</code></pre>
<p>The result of this sampling on a regular grid of samples is shown in the image below. The left plot shows the input points as simple ordered pairs in the range <code>[0,1)^2</code>, while the right plot shows these same points (colour for colour) mapped onto a unit disk using Polar mapping as described above.</p>
<p><img src="https://www.josswhittle.com/content/images/2018/04/Polar.png" alt="Polar"></p>
<p>As you can see the mapping is not ideal, with many points being over-sampled at the poles (I wonder why they call it Polar coordinates), and areas towards the rim left under-sampled. What we would actually like is a disk sampling strategy that keeps the uniformity seen in the square distribution while mapping the points onto the disk.</p>
<p>Enter, Concentric Disk Sampling. <a href="http://pdfs.semanticscholar.org/4322/6a3916a85025acbb3a58c17f6dc0756b35ac.pdf">A Low Distortion Map Between Disk and Square, Shirley &amp; Chiu 1997</a> presents the idea for warping the unit square into that of a unit circle. Their method is nice but it contains a lot of nested branching for determining which quadrant the current point lays within. <a href="http://psgraphics.blogspot.com/2011/01/improved-code-for-concentric-map.html">Shirley mentions an improved variant of this mapping on his blog, accredited to Dave Cline</a>. Cline's method only uses one if-else branch and is simpler to implement.</p>
<p>Again, given a disk centered origin <code>(0,0)</code> with radius <code>r</code></p>
<pre><code>// Draw two uniform random numbers in the range [0,1)
R1 = RAND(0,1);
R2 = RAND(0,1);

// Initial mapping into [-1,1]^2
a = (2 * R1) - 1;
b = (2 * R2) - 1;

// Avoid a potential divide by zero at the centre of the square
if (a == 0 &amp;&amp; b == 0) {
    x = 0; y = 0;
    return;
}

phi = 0; radius = r;

// Uses squares instead of absolute values
if ((a*a) &gt; (b*b)) { 
    // Top half
    radius  *= a;
    phi = (pi/4) * (b/a);
}
else {
    // Bottom half
    radius *= b;
    phi = (pi/2) - ((pi/4) * (a/b)); 
}

// Map the distorted Polar coordinates (phi,radius) 
// into the Cartesian (x,y) space
x = cos(phi) * radius;
y = sin(phi) * radius;
</code></pre>
<p><img src="https://www.josswhittle.com/content/images/2018/04/Concentric.png" alt="Concentric"></p>
<p>This gives a uniform distribution of samples over the disk in Cartesian space. The result of the mapping applied to the same set of uniform square samples is shown above. Notice how we now get full coverage of the disk using just as many samples, and that each point has (relatively) equal distance to all of its neighbors, meaning no bunching at the poles, and no under-sampling at the fringe.</p>
<p>I've applied this sampling technique to my path tracer as a means of sampling the aperture of the virtual camera when computing depth of field. Convergence to the true out-of-focus light distribution is now much faster and more accurate than it was with Polar sampling which, due to oversampling at the poles, caused a disproportionate number of rays to be fired along trajectories very close to the true ray.</p>
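<p>For reference, a vectorized NumPy transcription of Cline's pseudocode above (my own sketch, not production code):</p>

```python
import numpy as np

def concentric_disk(u, v, r=1.0):
    """Map uniform samples (u, v) in [0,1)^2 onto a radius-r disk using
    Dave Cline's branch-reduced concentric mapping, vectorized over arrays."""
    a = 2.0 * np.asarray(u) - 1.0
    b = 2.0 * np.asarray(v) - 1.0
    # Choose the wedge by comparing squares instead of absolute values
    use_a = (a * a) > (b * b)
    radius = r * np.where(use_a, a, b)
    # Both branches are evaluated elementwise, so silence the harmless
    # divide warnings from the unselected lane
    with np.errstate(divide='ignore', invalid='ignore'):
        phi = np.where(use_a,
                       (np.pi / 4.0) * (b / a),
                       (np.pi / 2.0) - (np.pi / 4.0) * (a / b))
    phi = np.nan_to_num(phi)  # centre of the square: a == b == 0
    return radius * np.cos(phi), radius * np.sin(phi)
```

<p>Being vectorized, this maps a whole batch of stratified aperture samples in one call rather than looping point by point.</p>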
</div>]]></content:encoded></item></channel></rss>