
Publications

Regularised Cross-Modal Hashing
In this paper we propose Regularised Cross-Modal Hashing (RCMH), a new cross-modal hashing scheme that projects annotation and visual feature descriptors into a common Hamming space. RCMH optimises the intra-modality similarity of data points in the annotation modality using an iterative three-step hashing algorithm: in the first step each training image is assigned a K-bit hashcode based on hyperplanes learnt at the previous iteration; in the second step the binary bits are smoothed by a formulation of graph regularisation so that similar data points have similar bits; in the third step a set of binary classifiers is trained to predict the regularised bits with maximum margin. Visual descriptors are projected into the annotation Hamming space by a set of binary classifiers learnt using the bits of the corresponding annotations as labels. RCMH is shown to consistently improve retrieval effectiveness over state-of-the-art baselines.
Sean Moran, Victor Lavrenko
SIGIR 2015
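A minimal sketch of RCMH's third step and the cross-modal projection, assuming the annotation-side hashcodes have already been learned; scikit-learn's LinearSVC stands in for the max-margin binary classifiers:

```python
# One linear classifier per hashcode bit maps visual features into the
# annotation Hamming space (a sketch, not the paper's exact pipeline).
import numpy as np
from sklearn.svm import LinearSVC

def train_visual_hash_functions(X_visual, B_annotation):
    """X_visual: (n, d) visual descriptors; B_annotation: (n, K) bits in {-1, +1}."""
    classifiers = []
    for k in range(B_annotation.shape[1]):
        clf = LinearSVC(C=1.0)                  # max-margin fit to the k-th bit
        clf.fit(X_visual, B_annotation[:, k])
        classifiers.append(clf)
    return classifiers

def hash_image(x, classifiers):
    """Project one visual descriptor into the shared Hamming space."""
    return np.array([clf.predict(x.reshape(1, -1))[0] for clf in classifiers])
```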
A Note on Automatic Kernel Carpentry for Atomistic Support of Continuum Stress
Previous work on modelling atomistic support of continuum stress has used an isotropic kernel function: an identical weight is assigned to atoms at the same spatial distance, which is tantamount to a local constant regression model. In this paper we employ a local linear regression model and leverage the mechanism of automatic kernel carpentry to allow for spatial averaging adaptive to the local distribution of atoms. This is of interest for determining atomistic stress at stacking faults, interfaces or surfaces. It is shown in this study that for crystalline solids, although the local linear regression model performs well, the additional computational costs are not justified compared to the local constant regression model.
Manfred Ulz, Sean Moran
arXiv Pre-print
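The contrast between the two estimators can be made concrete in one dimension. A sketch, assuming a Gaussian weighting kernel; the slope term in the local linear fit is where the automatic kernel carpentry correction arises:

```python
# Local constant (Nadaraya-Watson) vs. local linear regression at a
# query point x0 (illustrative 1-D sketch).
import numpy as np

def gauss_w(x0, x, h):
    return np.exp(-0.5 * ((x - x0) / h) ** 2)

def local_constant(x0, x, y, h):
    w = gauss_w(x0, x, h)
    return np.sum(w * y) / np.sum(w)

def local_linear(x0, x, y, h):
    # Weighted least-squares fit of a line around x0; the intercept is
    # the estimate. Near boundaries or uneven atom distributions the
    # slope term supplies the correction the constant fit lacks.
    w = gauss_w(x0, x, h)
    A = np.stack([np.ones_like(x), x - x0], axis=1)
    beta, *_ = np.linalg.lstsq(A * w[:, None] ** 0.5, y * w ** 0.5, rcond=None)
    return beta[0]
```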
Graph Regularised Hashing
In this paper we propose a two-step iterative scheme, Graph Regularised Hashing (GRH), for incrementally adjusting the positioning of the hashing hypersurfaces to better conform to the supervisory signal: in the first step the binary bits are regularised using a data similarity graph so that similar data points receive similar bits. In the second step the regularised hashcodes form targets for a set of binary classifiers which shift the position of each hypersurface so as to separate opposite bits with maximum margin. GRH exhibits superior retrieval accuracy to competing hashing methods.
Sean Moran, Victor Lavrenko
ECIR 2015 (Oral)
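A minimal sketch of GRH's first (regularisation) step, assuming S is a row-normalised data-similarity matrix, B holds the current bits in {-1, +1}, and the mixing weight alpha is a hyperparameter; the second step would then refit one max-margin classifier per bit, as in the RCMH sketch above:

```python
# Smooth bits over the similarity graph so that neighbours agree.
import numpy as np

def graph_regularise(B, S, alpha=0.5, iters=1):
    """B: (n, K) bits in {-1, +1}; S: (n, n) row-normalised similarity graph."""
    for _ in range(iters):
        B = np.sign(alpha * (S @ B) + (1.0 - alpha) * B)
        B[B == 0] = 1            # break ties deterministically
    return B
```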
A Sparse Kernel Relevance Model for Automatic Image Annotation
In this work we demonstrate that the standard assignment of kernels to feature types is sub-optimal and a substantially higher image annotation accuracy can be attained by adapting the kernel-feature assignment. We formulate an efficient greedy algorithm to find the best kernel-feature alignment and show that it is able to rapidly find a sparse subset of features that maximises annotation F1 score. In a second contribution we introduce two data-adaptive kernels for image annotation - the generalised Gaussian and multinomial kernels - which we demonstrate can better model the distribution of image features as compared to standard kernels. Evaluation is conducted on three standard image datasets across a selection of different feature representations. The proposed SKL-CRM model is found to attain performance that is competitive to a suite of state-of-the-art image annotation models.
Sean Moran, Victor Lavrenko
IJMIR 2014 (Journal) (Best Papers in Image Retrieval)
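A sketch of a generalised Gaussian kernel of the kind the paper adapts to each feature type; the scale alpha and shape beta are free parameters, and summing over dimensions inside the exponent is an illustrative choice rather than the paper's exact formulation:

```python
# Generalised Gaussian kernel: beta = 2 recovers the Gaussian,
# beta = 1 the Laplacian; intermediate values interpolate between them.
import numpy as np

def generalised_gaussian_kernel(x, y, alpha=1.0, beta=1.5):
    """x, y: feature vectors of equal length."""
    return np.exp(-np.sum((np.abs(x - y) / alpha) ** beta))
```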
Sparse Kernel Learning for Image Annotation
We formulate a sparse kernel learning framework for the CRM, dubbed the SKL-CRM, that greedily selects an optimal combination of kernels. Our kernel learning framework rapidly converges to an annotation accuracy that substantially outperforms a host of state-of-the-art annotation models. We make two surprising conclusions: firstly, if the kernels are chosen correctly, only a very small number of features are required to achieve superior performance over models that utilise a full suite of feature types; and secondly, the standard default selection of kernels commonly used in the literature is sub-optimal, and it is much better to adapt the kernel choice based on the feature type and image dataset.
Sean Moran, Victor Lavrenko
ICMR 2014 (Oral) (Best Student Paper Winner)
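A hypothetical sketch of the greedy kernel-feature alignment loop: repeatedly add the (feature, kernel) pair that most improves validation F1, stopping when no pair helps. Here `annotate_f1` is a stand-in callable for running the CRM with a candidate assignment and scoring it:

```python
def greedy_kernel_selection(features, kernels, annotate_f1):
    """Greedily build a sparse feature -> kernel assignment."""
    chosen, best_f1 = {}, 0.0
    improved = True
    while improved:
        improved = False
        for f in features:
            if f in chosen:
                continue
            for k in kernels:
                f1 = annotate_f1({**chosen, f: k})
                if f1 > best_f1:
                    best_f1, best_pair, improved = f1, (f, k), True
        if improved:
            chosen[best_pair[0]] = best_pair[1]
    return chosen, best_f1
```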
Real-Time Detection, Tracking and Monitoring of Discovered Events in Social Media
We introduce ReDites, a system for real-time event detection, tracking, monitoring and visualisation. It is designed to assist Information Analysts in understanding and exploring complex events as they unfold in the world. Events are automatically detected from the Twitter stream. Then those that are categorised as being security-relevant are tracked, geolocated, summarised and visualised for the end-user. Furthermore, the system tracks changes in emotions over events, signalling possible flashpoints or abatement. We demonstrate the capabilities of ReDites using an extended use case from the September 2013 Westgate shooting incident. Through an evaluation of system latencies, we also show that enriched events are made available for users to explore within seconds of that event occurring.
Osborne, M., Moran, S., McCreadie, R., Von Lunen, A., Sykora, M., Cano, E., Ireson, N., MacDonald, C., Ounis, I., He, Y., Jackson, T., O'Brien, A.
ACL 2014 (Demo)
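A hypothetical skeleton of the pipeline stages described above; every stage name is a stand-in callable for illustration, not the system's actual API:

```python
# Detected events flow through filtering and enrichment stages before
# being handed to the visualisation front-end.
def redites_pipeline(tweet_stream, detect, is_security_relevant,
                     track, geolocate, summarise, emotion):
    for event in detect(tweet_stream):
        if not is_security_relevant(event):
            continue
        event = geolocate(track(event))
        yield summarise(event), emotion(event)
```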
Optimal Kernel Shape and Bandwidth for Atomistic Support of Continuum Stress
The estimation of stress at a continuum point on the atomistic scale requires a pre-defined kernel function. In this paper we propose a strategy for computing the optimal shape and bandwidth for the kernel. We evaluate our proposed approach on copper using three classical elasticity problems. Our evaluation yields three key findings: firstly, our technique can provide a physically meaningful estimation of kernel bandwidth; secondly, we show that a uniform kernel is preferred, thereby justifying the default selection of this kernel shape in future work; and thirdly, we can reliably estimate both of these attributes in a data-driven manner, obtaining values that lead to an accurate estimation of the stress at a continuum point.
Manfred Ulz, Sean Moran
MSMSE 2013 (Journal)
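A simplified sketch of kernel-weighted spatial averaging of per-atom stress at a continuum point, using the uniform (top-hat) kernel shape the paper favours; per-atom stress tensors are assumed given, and the pairwise bond term of the full estimator is omitted for brevity:

```python
import numpy as np

def uniform_kernel(r, h):
    """Uniform kernel of bandwidth h, normalised over a sphere."""
    volume = 4.0 / 3.0 * np.pi * h ** 3
    return np.where(r <= h, 1.0 / volume, 0.0)

def continuum_stress(x, positions, atom_stresses, h):
    """positions: (n, 3) atom coordinates; atom_stresses: (n, 3, 3) tensors."""
    r = np.linalg.norm(positions - x, axis=1)
    w = uniform_kernel(r, h)
    return np.tensordot(w, atom_stresses, axes=1)   # (3, 3) stress at x
```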
Variable Bit Quantisation for LSH
In this paper we introduce a method for optimally allocating a variable number of bits per LSH hyperplane. Previous approaches assign a constant number of bits per hyperplane. This neglects the fact that some hyperplanes may be more informative than others. Our method, dubbed Variable Bit Quantisation (VBQ), provides a non-uniform bit allocation across hyperplanes. VBQ discovers both the most discriminative set of hyperplanes and allocates those hyperplanes proportionally more bits from a fixed bit budget. VBQ does so in a data-driven manner that is independent of the projection method. Comprehensive evaluation demonstrates that our approach decisively outperforms the state-of-the-art.
Sean Moran, Victor Lavrenko, Miles Osborne
ACL 2013 (Oral)
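One plausible greedy realisation of the budget allocation; the informativeness scores and the diminishing-returns rule are assumptions for illustration, not the paper's exact algorithm:

```python
# Hand out a fixed bit budget in proportion to per-hyperplane scores;
# uninformative hyperplanes can end up with zero bits.
import numpy as np

def allocate_bits(scores, budget):
    """scores: (h,) informativeness per hyperplane; returns bits per hyperplane."""
    scores = np.asarray(scores, dtype=float).copy()
    bits = np.zeros_like(scores, dtype=int)
    for _ in range(budget):
        i = int(np.argmax(scores))
        bits[i] += 1
        scores[i] /= 2.0     # diminishing returns for each extra bit
    return bits
```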
Neighbourhood Preserving Quantisation for LSH
We introduce a scheme for optimally allocating multiple bits per hyperplane for Locality Sensitive Hashing (LSH). Existing approaches binarise LSH projections by thresholding at zero, yielding a single bit per dimension. We demonstrate that this is a sub-optimal bit allocation approach that can easily destroy the neighbourhood structure in the original feature space. Our proposed method, dubbed Neighbourhood Preserving Quantisation (NPQ), assigns multiple bits per hyperplane based upon adaptively learned thresholds. NPQ exploits a pairwise affinity matrix to discretise each dimension such that nearest neighbours in the original feature space fall within the same quantisation thresholds and are therefore assigned identical bits. NPQ is not only applicable to LSH, but can also be applied to any low-dimensional projection scheme. Despite using half the number of hyperplanes, NPQ is shown to improve LSH-based retrieval accuracy by up to 65% compared to the state-of-the-art.
Sean Moran, Victor Lavrenko, Miles Osborne
SIGIR 2013 (Poster)
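A minimal sketch of multi-bit encoding against learned thresholds; the natural binary code shown is one possible bin labelling, and the learning of the thresholds themselves (from the pairwise affinity matrix) is omitted:

```python
# Bin each projected value with np.digitize, then emit an n_bits code per
# bin; points falling between the same thresholds get identical bits.
import numpy as np

def encode_dimension(projections, thresholds, n_bits=2):
    """projections: (n,) values on one hyperplane;
    thresholds: 2**n_bits - 1 sorted cut points."""
    bins = np.digitize(projections, thresholds)              # bin index per point
    codes = (bins[:, None] >> np.arange(n_bits)[::-1]) & 1   # (n, n_bits) bits
    return codes
```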
A Gaussian mixture modelling approach to the data-driven estimation of atomistic support for continuum stress
We motivate the application of Gaussian mixture modelling as a principled probabilistic means of estimating the size of the atomistic space averaging volume directly from the atomistic data. This model enables the discovery of homogeneous sub-populations of atoms in an entirely data-driven manner. The size of the space-averaging volume then follows naturally from the average maximum extent of the sub-populations. Furthermore, we demonstrate how the model can be used to compute the stress at arbitrary continuum points. Thorough evaluation is conducted on a numerical example of an edge dislocation in a single crystal. We find that our results are in excellent agreement with the corresponding analytical solution.
Manfred Ulz, Sean Moran
MSMSE 2012 (Journal) (MSMSE Highlight of 2012)
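A minimal sketch using scikit-learn's GaussianMixture as a stand-in for the paper's mixture fit; taking two standard deviations along each component's widest principal axis as its extent is an illustrative assumption, and in practice the number of components would itself be chosen in a data-driven way (e.g. by an information criterion):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def estimate_averaging_volume(positions, n_components=5):
    """positions: (n, 3) atomic coordinates; returns an averaging-volume size."""
    gmm = GaussianMixture(n_components=n_components).fit(positions)
    # extent of each sub-population: two standard deviations along its
    # widest principal axis; average the extents over components
    extents = [2.0 * np.sqrt(np.linalg.eigvalsh(c).max())
               for c in gmm.covariances_]
    return float(np.mean(extents))
```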
Optimal Tag Sets for Automatic Image Annotation
In this paper we introduce a new form of the Continuous Relevance Model (the BS-CRM) that captures the correlation between tags in a formal and consistent manner. We apply a beam search algorithm to find a near optimal set of mutually correlated tags for an image in a time that is linear in the depth of the search tree. We conduct an examination of the model performance under different kernels for the representation of the image feature distributions and suggest a method of adapting the kernel to the dataset. BS-CRM with a Minkowski kernel is found to significantly increase recall by 42% and precision by 38% over the original CRM model and outperforms more recent baselines on the standard Corel 5k dataset.
Sean Moran, Victor Lavrenko
BMVC 2011 (Oral)
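A minimal sketch of beam search over tag sets, assuming a hypothetical `score(tags)` that returns the model's joint relevance of a candidate set for the image; keeping a fixed beam width at each depth is what makes the search linear in the depth of the tree:

```python
def beam_search_tags(vocabulary, score, depth=5, beam_width=10):
    """Return the best (tags, score) pair found at the given depth."""
    beams = [((), 0.0)]
    for _ in range(depth):
        candidates = []
        for tags, _ in beams:
            for t in vocabulary:
                if t not in tags:
                    new = tags + (t,)
                    candidates.append((new, score(new)))
        # keep only the best beam_width partial tag sets
        # (duplicate orderings are ignored for brevity)
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_width]
    return beams[0]
```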
Using the Grid for Satellite Imagery with UNOSAT
This paper describes work which investigates how the Large Hadron Collider (LHC) Computing Grid (LCG) can be used to upload, store, compress and download satellite image data files on behalf of the United Nations Organization UNOSAT. Mechanisms for satellite image upload, storage and download are outlined. LCG computational resources are exercised by a JPEG2000 compression framework that transforms UNOSAT's original GeoTiff images to JPEG2000 ECW files. It is shown how the availability of these functions will aid humanitarian relief efforts by enabling UNOSAT to vastly increase its portfolio of satellite images and to supply images to field workers much more rapidly through pre-compressed files. Importantly, these benefits arise at zero financial cost to UNOSAT.
Sean Moran, Patricia Mendez Lorenzo
1st IEEE International Conference on e-Science and Grid Computing: Deploying Production Grids - Beyond the Hype, Melbourne, Australia (Workshop)
Robust Fusion of Colour Appearance Models for Object Tracking
This paper reports on work which fuses three different appearance models to enable robust tracking of multiple objects on the basis of colour. Short-term variation in object colour is modelled non-parametrically using adaptive binning histograms. Appearance changes at intermediate time scales are represented by semi-parametric (Gaussian mixture) models while a parametric subspace method (Robust PCA) is employed to model long term stable appearance. Fusion of the three models is achieved through particle filtering and the Democratic integration method. It is shown how robust estimation and adaptation of the models both individually and in combination results in improved visual tracking accuracy.
Chris Town, Sean Moran
BMVC 2004 (Poster)
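A sketch of the democratic integration idea, assuming per-cue score maps over candidate object positions and a quality measure of each cue's agreement with the fused result; the adaptation rate tau is an assumed parameter:

```python
# Cues that recently agreed with the fused estimate earn higher
# reliability; the fused score is the reliability-weighted sum of cues.
import numpy as np

def fuse(cue_scores, reliabilities):
    """cue_scores: (c, n) per-cue scores over n candidates; returns (n,)."""
    return reliabilities @ cue_scores

def update_reliabilities(reliabilities, qualities, tau=0.9):
    """qualities: (c,) agreement of each cue with the fused result."""
    r = tau * reliabilities + (1.0 - tau) * qualities
    return r / r.sum()       # keep the weights normalised
```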

Misc Projects

Automatic Image Tagging
Video Inpainting
Linking Video Segments to Relevant Wikipedia Content