Jano van Hemert
  • National e-Science Centre
    15 South College Street
    Edinburgh EH8 9AA
    United Kingdom
  • +44 131 650 9820
  • I am the Director of Research & Data Science at Optos (a Nikon company), where I direct a group of researchers and engineers developing novel technology for retinal imaging in eye healthcare. I lead academic and industrial relationships from proposal through R&D into clinical trials, with the aim of commercialising research from partnering universities and businesses globally. I also lead the strategy and implementation of the intellectual property portfolio.

    Dr van Hemert received an MSc in 1998 and a PhD in 2002, both from Leiden University, The Netherlands. He arrived in Scotland in 2004 on a Talent Fellowship from the Netherlands Organisation for Scientific Research. Until 2010, he held several posts at the University of Edinburgh, where he founded the Data-Intensive Research Group. In 2010 he moved to Optos plc to build a strong collaborative programme with universities both local and global. Since 2010 he has been an Honorary Fellow of the University of Edinburgh. In 2013, he was elevated to Senior Member of the IEEE. In 2018, he was elected a Fellow of the Royal Society of Edinburgh.
  • Joost Kok, Thomas Bäck, Guszti Eiben
The classification of blood vessels into arterioles and venules is a fundamental step in the automatic investigation of retinal biomarkers for systemic diseases. In this paper, we present a novel technique for vessel classification on ultra-wide-field-of-view images of the retinal fundus acquired with a scanning laser ophthalmoscope. To the best of our knowledge, this is the first time that a fully automated artery/vein classification technique for this type of retinal imaging with no manual intervention has been presented. The proposed method exploits hand-crafted features based on local vessel intensity and vascular morphology to formulate a graph representation from which a globally optimal separation between the arterial and venular networks is computed by a graph-cut approach. The technique was tested on three different data sets (one publicly available and two local) and achieved an average classification accuracy of 0.883 in the largest data set.
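Setting the hand-crafted features aside, the final graph-cut step can be illustrated with a toy sketch: vessel segments become graph nodes, edge weights encode the cost of separating adjacent segments, artery/vein seeds act as the two terminals, and a minimum s-t cut splits the graph into the two vascular networks. The max-flow routine, node names, and weights below are illustrative assumptions, not the paper's implementation.

```python
from collections import deque

def min_cut_partition(capacity, source, sink):
    """Edmonds-Karp max-flow; returns the set of nodes on the source side
    of a minimum s-t cut. `capacity` is a dict {u: {v: weight}}."""
    # Build a residual graph, adding zero-capacity reverse edges.
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:
            residual.setdefault(v, {}).setdefault(u, 0)
    while True:
        # BFS for an augmenting path from source to sink.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            break
        # Find the bottleneck capacity along the path, then push flow.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
    # Source side of the cut = nodes still reachable in the residual graph.
    reachable, stack = {source}, [source]
    while stack:
        u = stack.pop()
        for v, cap in residual[u].items():
            if cap > 0 and v not in reachable:
                reachable.add(v)
                stack.append(v)
    return reachable

# Toy vessel graph: 'A'/'V' are artery/vein terminals seeded near the disc;
# s1..s4 are vessel segments; weights mimic feature similarity, so the
# weak s2-s3 link is the cheapest place to cut.
g = {
    "A": {"s1": 5},
    "s1": {"s2": 3},
    "s2": {"s3": 1},
    "s3": {"s4": 3},
    "s4": {"V": 5},
    "V": {},
}
arteries = min_cut_partition(g, "A", "V")
```

With these weights the cut falls on the weak s2-s3 edge, so s1 and s2 land on the arterial side while s3 and s4 join the venular network.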
This paper proposes a novel Adaptive Region-based Edge Smoothing Model (ARESM) for automatic boundary detection of optic disc and cup to aid automatic glaucoma diagnosis. The novelty of our approach consists of two aspects: 1) automatic detection of an initial optimum object boundary based on a Region Classification Model (RCM) in a pixel-level multidimensional feature space; 2) an Adaptive Edge Smoothing Update model (AESU) of contour points (e.g. misclassified or irregular points) based on iterative force field calculations with contours obtained from the RCM by minimising an energy function (an approach that does not require predefined geometric templates to guide auto-segmentation). Such an approach provides robustness in capturing a range of variations and shapes. We have conducted a comprehensive comparison between our approach and existing state-of-the-art deformable models and validated it with publicly available datasets. The experimental evaluation shows that the proposed ap...
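The iterative contour-update idea can be hinted at with a heavily simplified sketch: each contour point feels a smoothing force toward the midpoint of its neighbours and a data-fidelity force back toward its original (RCM-derived) position, and the two are iterated to equilibrium. The weights and the single spiked point below are invented for illustration; this is not the AESU force-field formulation.

```python
def smooth_contour(points, alpha=0.5, data_weight=0.3, iterations=50):
    """Relax a closed contour: each point moves toward the midpoint of its
    two neighbours (smoothing force) while a data term pulls it back
    toward its original position. Illustrative sketch only."""
    original = list(points)
    pts = list(points)
    n = len(pts)
    for _ in range(iterations):
        new = []
        for i, (x, y) in enumerate(pts):
            (px, py) = pts[i - 1]
            (qx, qy) = pts[(i + 1) % n]
            mx, my = (px + qx) / 2, (py + qy) / 2   # neighbour midpoint
            ox, oy = original[i]
            # Blend the smoothing pull with the data-fidelity pull.
            fx = alpha * (mx - x) + data_weight * (ox - x)
            fy = alpha * (my - y) + data_weight * (oy - y)
            new.append((x + fx, y + fy))
        pts = new
    return pts

# A unit square with one spiked (irregular) contour point at (0.5, 3.0);
# smoothing drags the spike down without collapsing the contour.
contour = [(0, 0), (1, 0), (1, 1), (0.5, 3.0), (0, 1)]
smoothed = smooth_contour(contour)
```

The data term is what keeps the result anchored to the classifier's boundary: with `data_weight=0` the contour would shrink toward its centroid, with a large weight it would barely move.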
We demonstrate a multimode detection system in a scanning laser ophthalmoscope (SLO) that enables simultaneous operation in confocal, indirect, and direct modes to permit an agile trade between image contrast and optical sensitivity across the retinal field of view to optimize the overall imaging performance, enabling increased contrast in very wide-field operation. We demonstrate the method on a wide-field SLO employing a hybrid pinhole at its image plane, to yield a twofold increase in vasculature contrast in the central retina compared to its conventional direct mode while retaining high-quality imaging across a wide field of the retina, of up to 200 deg and 20 μm on-axis resolution.
Glaucoma is one of the leading causes of blindness worldwide. There is no cure for glaucoma, but detection at its earliest stage and subsequent treatment can help patients avoid blindness. Currently, optic disc and retinal imaging facilitates glaucoma detection, but this method requires manual post-imaging analysis that is time-consuming and subjective, as it depends on image assessment by human observers. Therefore, it is necessary to automate this process. In this work, we have first proposed a novel computer-aided approach for automatic glaucoma detection based on a Regional Image Features Model (RIFM) which can automatically perform classification between normal and glaucoma images on the basis of regional information. Different from all the existing methods, our approach can extract both geometric (e.g. morphometric properties) and non-geometric properties (e.g. pixel appearance/intensity values, texture) from images and significantly increase the classification performance. Our p...
To establish the extent of the peripheral retinal vasculature in normal eyes using ultra-widefield (UWF) fluorescein angiography. Prospective, observational study. Fifty-nine eyes of 31 normal subjects, stratified by age, with no evidence of ocular disease in either eye by history and ophthalmoscopic examination. Ultra-widefield fluorescein angiographic images were captured centrally and with peripheral steering using the Optos 200Tx (Optos, Dunfermline, United Kingdom). Images obtained at different gaze angles were montaged and corrected for peripheral distortion using a stereographic projection method to provide a single image for grading of the peripheral edge of the visible vasculature. The border of the vascularized retina was expressed as a radial surface distance from the center of the optic disc. The vascularized area was calculated based on this mean peripheral border position for each quadrant. Mean distance (mm) from the center of optic disc to the peripheral vascular bor...
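The stereographic correction mentioned above rests on a standard piece of geometry: a stereographic projection maps the (approximately spherical) retina onto a flat image plane, and inverting it converts a radial distance measured on the flat montage back into arc length along the retinal surface. A minimal sketch of that inversion follows, assuming a spherical eye of radius 12 mm; the actual Optos calibration and projection parameters are not reproduced here.

```python
import math

def arc_distance_mm(planar_mm, eye_radius_mm=12.0):
    """Invert a stereographic projection: convert a radial distance on the
    flat (projected) image back to arc length along the retinal surface.
    For a sphere of radius R projected from the opposite pole,
    r = 2R * tan(theta / 2), so theta = 2 * atan(r / (2R)) and the
    surface distance is s = R * theta. Illustrative model only."""
    theta = 2.0 * math.atan(planar_mm / (2.0 * eye_radius_mm))
    return eye_radius_mm * theta

# Near the centre the correction is negligible, but in the far periphery
# the flat image substantially overstates the true surface distance.
central = arc_distance_mm(1.0)
peripheral = arc_distance_mm(20.0)
```

This is why the peripheral vascular border has to be graded on distortion-corrected distances rather than raw image millimetres.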
This chapter describes the application of data-intensive methods to the automatic identification and annotation of gene expression patterns in the mouse embryo. The first section of the chapter introduces ideas behind modern computational and systems biology, and how the explosion of data in the postgenomic world has led to new possibilities and even greater challenges. The second section talks about the particular computational biology problem and describes in depth annotating images of gene expression with the right anatomical terms. An automated solution based on data-intensive methods is discussed in the third section. The final section looks ahead to the biological significance and systems biology application of these approaches and also describes a large-scale challenge and a possible series of experiments with a novel data-intensive computational architecture.
Proteomics, the study of all the proteins contained in a particular sample, e.g., a cell, is a key technology in current biomedical research. The complexity and volume of proteomics data sets produced by mass spectrometric methods clearly suggests the use of grid-based high-performance computing for analysis. TOPP and OpenMS are open-source packages for proteomics data analysis; however, they do not
The automatic allocation of enterprise workload to resources can be enhanced by being able to make 'what-if' response time predictions, whilst different allocations are being considered. It is important to quantitatively compare the effectiveness of different prediction techniques for use in cloud infrastructures. To help make the comparison of relevance to a wide range of possible cloud environments it is useful to consider the following.
The automatic allocation of enterprise workload to resources can be enhanced by being able to make what-if response time predictions whilst different allocations are being considered. We experimentally investigate an historical and a layered queuing performance model and show how they can provide a good level of support for a dynamic-urgent cloud environment.
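The flavour of a what-if prediction can be conveyed with a deliberately tiny stand-in for a layered queueing model: treat each server as an M/M/1 queue, sum the arrival rates of the workloads a candidate allocation assigns to it, and compare predicted mean response times across allocations. The workload names, rates, and single-class M/M/1 assumption below are illustrative, not the models evaluated in the paper.

```python
def predicted_response_time(arrival_rate, service_rate):
    """Mean response time of an M/M/1 queue: R = 1 / (mu - lambda)."""
    if arrival_rate >= service_rate:
        return float("inf")   # the server saturates; no steady state
    return 1.0 / (service_rate - arrival_rate)

def what_if(workload_rates, allocation, service_rate=10.0):
    """Predict per-server response times for a candidate allocation,
    given as a mapping of workload name -> server id."""
    load = {}
    for workload, server in allocation.items():
        load[server] = load.get(server, 0.0) + workload_rates[workload]
    return {s: predicted_response_time(lam, service_rate)
            for s, lam in load.items()}

rates = {"web": 4.0, "batch": 5.0, "reports": 3.0}
# Candidate A piles everything onto one server; candidate B spreads it.
plan_a = what_if(rates, {"web": 0, "batch": 0, "reports": 0})
plan_b = what_if(rates, {"web": 0, "batch": 1, "reports": 1})
```

The model immediately flags candidate A as infeasible (the combined rate of 12 requests/s exceeds the 10 requests/s service rate), while candidate B yields finite predicted response times on both servers, which is exactly the kind of cheap comparison a what-if predictor must support before any workload is actually moved.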
To facilitate data mining and integration (DMI) processes in a generic way, we investigate a parallel pipeline streaming model. We model a DMI task as a streaming data-flow graph: a directed acyclic graph (DAG) of Processing Elements (PEs). The composition mechanism links PEs via data streams, which may be in memory, buffered via disks or inter-computer data-flows. This makes it
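A linear special case of this streaming data-flow model can be sketched with Python generators as the Processing Elements: each PE consumes an upstream stream and emits a downstream one, items flow through one at a time, and no intermediate result is materialised. The three PEs below are invented for illustration; real in-memory, disk-buffered, or inter-computer streams and a full DAG composition are beyond this sketch.

```python
def numbers(n):
    """Source PE: emit a stream of integers."""
    yield from range(n)

def square(stream):
    """Transform PE: consume one stream, emit the square of each item."""
    for x in stream:
        yield x * x

def running_sum(stream):
    """Aggregating PE: emit the partial sums of its input stream."""
    total = 0
    for x in stream:
        total += x
        yield total

# Compose the PEs into a linear data-flow graph. Because each stage is
# lazy, the stages overlap in a pipeline: an item is squared and summed
# as soon as the source emits it.
pipeline = running_sum(square(numbers(5)))
result = list(pipeline)
```

Generalising from this chain to a DAG mainly means allowing a PE to feed several consumers (fan-out) and to merge several input streams (fan-in), with the composition mechanism deciding whether each edge is an in-memory, disk-buffered, or inter-computer data-flow.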

And 61 more publications.