surr_code/0000755000175000017500000000000011462353402011453 5ustar iam23iam23surr_code/README0000644000175000017500000001175511462353402012344 0ustar iam23iam23Executive summary
-----------------
This distribution contains code to reproduce the results in:
Slice sampling covariance hyperparameters of latent Gaussian models.
Iain Murray and Ryan Prescott Adams.
Advances in Neural Information Processing Systems (NIPS) 23, 2010
http://books.nips.cc/
http://homepages.inf.ed.ac.uk/imurray2/pub/10hypers/
Much of the code here is detail required to run the experiments. The main
methods in this paper are implemented in update_theta_*.m and contain
documentation.
Some details
------------
This distribution contains the code that was used to produce the reported
results. We did not wish to change anything in case a "small change"
accidentally broke the code, or misrepresented what we did. As a result
there are some known imperfections, which we outline here. We also give
some details for which the paper had no room.
The function names don't match the names used in the paper. Here is the
correspondence:
---------------------------------------------
Paper's name Matlab function
---------------------------------------------
fixed update_theta_simple.m
prior-white update_theta_aux_chol.m
surr- update_theta_aux_surr.m
post- update_theta_aux_fixed.m
---------------------------------------------
When it gets down to the linear algebra, the surr- and post- methods are
very similar. There are various ways of computing (factorizations of) the
approximate-posterior covariance $R_\theta$, which vary slightly in
numerical stability and computation time. Unfortunately our implementations
use slightly different methods for the surr- and post- methods, for no real
reason. This is evident in the small timing differences on the Gaussian
problem, which we would expect to be more similar.
The surr-site, surr-taylor, post-site and post-taylor methods result from
calling the above functions with different auxiliary noise covariances. The
code refers to auxiliary noise covariance $S_\theta$ through diagonal
standard deviations "aux". (Generalizing to arbitrary covariances would be
trivial, but take more computer time.)
------------------------------------------------
Paper's name Likelihood Matlab aux function
------------------------------------------------
-site Logistic logistic_aux.m
Gaussian sqrt(diag(K))
Poisson poiss_aux.m
-taylor Logistic N/A
Gaussian sqrt(diag(K))
Poisson poiss_aux_fixed.m
------------------------------------------------
Inner-loop approximations: the logistic_aux uses a numerical approximation
to the precise moment matching. However the MCMC method is still valid. The
Laplace approximation to the Poisson site posterior involves the Lambert-W
function. The lambertw function in Matlab is *really* slow, so we started
to approximate further. We later provided our own lambertw routine, making
the straight-up Laplace approximation feasible. It turns out the code we
ran does something slightly different for zero counts. It would probably be
better to use the Laplace approximation throughout, but we have left in
what the reported results actually used. Future users should try the
commented out "Straight-up Laplace approx" in poiss_aux.m.
CARE: There are some rescaling issues to get right, which we dealt with in
the run_ and setup_ functions for each application. For example, the
logistic_aux function assumes there is a logistic likelihood with gain 1
and the GP has variable signal (marginal) variance. In the simulations we
use a fixed signal variance and variable gain, because this is equivalent
but computationally more efficient. However, we had to be careful to
convert between the representations correctly. Some rescaling issues arise
with the Cox process: does the GP function represent the log-intensity of
the process, or the log-rate for the Poisson distribution in each bin?
Sometimes one view is easier/faster and sometimes the other.
A final warning is that add_call_counter.m is inefficient and shouldn't be
used if you care about speed.
Distribution
------------
Code written by the authors of the paper is available under the standard
MIT license. However, various other files by other authors are included.
For full attributions and license terms see COPYING.
Dependencies:
-------------
For convenience we have included the datasets that we used and a copy of
(an old version of) the GPML toolbox with authors and distribution terms
given in its README file.
As it says in the paper, we ran this code on Matlab 7.8. Somewhat earlier
versions will work, but 7.x is probably required. It should all work in
recent versions of Octave too, although this hasn't been tested for a
while. Some of the code may currently depend on Un*x (Linux/Mac) in a
trivial way.
effective_size_rcoda.m depends on R and R-CODA being installed and in your
path. We don't have a Windows machine with R to get the system calls
working there.
surr_code/COPYING0000644000175000017500000000357011462353402012513 0ustar iam23iam23The code in this archive that is written by us may be used freely under the
standard, permissive MIT license, copied below. However, some of the code is
written by other authors and distributed under different terms.
The gpml subdirectory was not written by us, but is provided for convenience.
See its Copyright file for more details.
-------------------------------------------------------------------------------
The standard MIT License for update_theta_*.m and other code in this
distribution that was written by Iain Murray and/or Ryan P. Adams.
http://www.opensource.org/licenses/mit-license.php
-------------------------------------------------------------------------------
Copyright (c) 2010 Iain Murray, Ryan P. Adams
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to
deal in the Software without restriction, including without limitation the
rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
IN THE SOFTWARE.
-------------------------------------------------------------------------------
surr_code/surr_code/0000755000175000017500000000000011462353403013441 5ustar iam23iam23surr_code/surr_code/plot_bars2.m0000644000175000017500000003763511415646336015714 0ustar iam23iam23clear;
fontsize = 10;
golden_ratio = (sqrt(5)-1.0)/2.0;
fig_width = 10;
fig_height = 2.7;
subplot_width = 2.9;
subplot_height = golden_ratio*subplot_width;
subplot_pos = [0 0.12 subplot_width/fig_width subplot_height/fig_height];
format_string = 'x%0.1e ';
expt_names = {'ionosphere', 'synthetic', 'mining', 'redwoods'};
%method_names = {'fixed', 'prior-white', 'surr-site', 'post-site', 'surr-taylor', 'post-taylor', 'surr-sigvar'};
method_names = {'fixed', 'prior-white', 'surr-site', 'post-site', 'surr-taylor', 'post-taylor'};
num_expts = length(expt_names);
num_runs = 10;
num_methods = length(method_names);
llh_calls = zeros([num_expts num_runs num_methods]);
cov_calls = zeros([num_expts num_runs num_methods]);
effcomp = zeros([num_expts num_runs num_methods]);
effcond = zeros([num_expts num_runs num_methods]);
elapsed = zeros([num_expts num_runs num_methods]);
%%%%%%%%% ionosphere %%%%%%%%%
expt_index = 1;
setup = setup_ionosphere();
runs = setup.runs;
ionosphere_simple = load('results/ionosphere_simple.mat');
ionosphere_chol = load('results/ionosphere_chol.mat');
ionosphere_surr_noise = load('results/ionosphere_surr_noise.mat');
ionosphere_fixed_noise = load('results/ionosphere_fixed_noise.mat');
%ionosphere_surr_gain = load('results/ionosphere_surr_gain.mat');
for run=1:runs
llh_calls(expt_index,run,1) = sum(ionosphere_simple.results(run).num_llh_calls);
llh_calls(expt_index,run,2) = sum(ionosphere_chol.results(run).num_llh_calls);
llh_calls(expt_index,run,3) = sum(ionosphere_surr_noise.results(run).num_llh_calls);
llh_calls(expt_index,run,4) = sum(ionosphere_fixed_noise.results(run).num_llh_calls);
%llh_calls(expt_index,run,7) = sum(ionosphere_surr_gain.results(run).num_llh_calls);
cov_calls(expt_index,run,1) = sum(ionosphere_simple.results(run).num_cov_calls);
cov_calls(expt_index,run,2) = sum(ionosphere_chol.results(run).num_cov_calls);
cov_calls(expt_index,run,3) = sum(ionosphere_surr_noise.results(run).num_cov_calls);
cov_calls(expt_index,run,4) = sum(ionosphere_fixed_noise.results(run).num_cov_calls);
%cov_calls(expt_index,run,7) = sum(ionosphere_surr_gain.results(run).num_cov_calls);
effcomp(expt_index,run,1) = ionosphere_simple.results(run).eff_comp_llh_samples;
effcomp(expt_index,run,2) = ionosphere_chol.results(run).eff_comp_llh_samples;
effcomp(expt_index,run,3) = ionosphere_surr_noise.results(run).eff_comp_llh_samples;
effcomp(expt_index,run,4) = ionosphere_fixed_noise.results(run).eff_comp_llh_samples;
%effcomp(expt_index,run,7) = ionosphere_surr_gain.results(run).eff_comp_llh_samples;
effcond(expt_index,run,1) = ionosphere_simple.results(run).eff_cond_llh_samples;
effcond(expt_index,run,2) = ionosphere_chol.results(run).eff_cond_llh_samples;
effcond(expt_index,run,3) = ionosphere_surr_noise.results(run).eff_cond_llh_samples;
effcond(expt_index,run,4) = ionosphere_fixed_noise.results(run).eff_cond_llh_samples;
%effcond(expt_index,run,7) = ionosphere_surr_gain.results(run).eff_cond_llh_samples;
elapsed(expt_index,run,1) = ionosphere_simple.results(run).elapsed;
elapsed(expt_index,run,2) = ionosphere_chol.results(run).elapsed;
elapsed(expt_index,run,3) = ionosphere_surr_noise.results(run).elapsed;
elapsed(expt_index,run,4) = ionosphere_fixed_noise.results(run).elapsed;
%elapsed(expt_index,run,7) = ionosphere_surr_gain.results(run).elapsed;
end
%%%%%%%%% gaussian %%%%%%%%%
expt_index = 2;
setup = setup_gaussian();
runs = setup.runs;
gaussian_simple = load('results/gaussian_simple.mat');
gaussian_chol = load('results/gaussian_chol.mat');
gaussian_surr_noise = load('results/gaussian_surr_noise.mat');
gaussian_fixed_taylor = load('results/gaussian_fixed_taylor.mat');
for run=1:runs
llh_calls(expt_index,run,1) = sum(gaussian_simple.results(run).num_llh_calls);
llh_calls(expt_index,run,2) = sum(gaussian_chol.results(run).num_llh_calls);
llh_calls(expt_index,run,3) = sum(gaussian_surr_noise.results(run).num_llh_calls);
llh_calls(expt_index,run,4) = sum(gaussian_fixed_taylor.results(run).num_llh_calls);
llh_calls(expt_index,run,5) = sum(gaussian_surr_noise.results(run).num_llh_calls);
llh_calls(expt_index,run,6) = sum(gaussian_fixed_taylor.results(run).num_llh_calls);
cov_calls(expt_index,run,1) = sum(gaussian_simple.results(run).num_cov_calls);
cov_calls(expt_index,run,2) = sum(gaussian_chol.results(run).num_cov_calls);
cov_calls(expt_index,run,3) = sum(gaussian_surr_noise.results(run).num_cov_calls);
cov_calls(expt_index,run,4) = sum(gaussian_fixed_taylor.results(run).num_cov_calls);
cov_calls(expt_index,run,5) = sum(gaussian_surr_noise.results(run).num_cov_calls);
cov_calls(expt_index,run,6) = sum(gaussian_fixed_taylor.results(run).num_cov_calls);
effcomp(expt_index,run,1) = gaussian_simple.results(run).eff_comp_llh_samples;
effcomp(expt_index,run,2) = gaussian_chol.results(run).eff_comp_llh_samples;
effcomp(expt_index,run,3) = gaussian_surr_noise.results(run).eff_comp_llh_samples;
effcomp(expt_index,run,4) = gaussian_fixed_taylor.results(run).eff_comp_llh_samples;
effcomp(expt_index,run,5) = gaussian_surr_noise.results(run).eff_comp_llh_samples;
effcomp(expt_index,run,6) = gaussian_fixed_taylor.results(run).eff_comp_llh_samples;
effcond(expt_index,run,1) = gaussian_simple.results(run).eff_cond_llh_samples;
effcond(expt_index,run,2) = gaussian_chol.results(run).eff_cond_llh_samples;
effcond(expt_index,run,3) = gaussian_surr_noise.results(run).eff_cond_llh_samples;
effcond(expt_index,run,4) = gaussian_fixed_taylor.results(run).eff_cond_llh_samples;
effcond(expt_index,run,5) = gaussian_surr_noise.results(run).eff_cond_llh_samples;
effcond(expt_index,run,6) = gaussian_fixed_taylor.results(run).eff_cond_llh_samples;
elapsed(expt_index,run,1) = gaussian_simple.results(run).elapsed;
elapsed(expt_index,run,2) = gaussian_chol.results(run).elapsed;
elapsed(expt_index,run,3) = gaussian_surr_noise.results(run).elapsed;
elapsed(expt_index,run,4) = gaussian_fixed_taylor.results(run).elapsed;
elapsed(expt_index,run,5) = gaussian_surr_noise.results(run).elapsed;
elapsed(expt_index,run,6) = gaussian_fixed_taylor.results(run).elapsed;
end
%%%%%%%%% mine %%%%%%%%%
expt_index = 3;
setup = setup_mine();
runs = setup.runs;
mine_simple = load('results/mine_simple.mat');
mine_chol = load('results/mine_chol.mat');
mine_surr_noise = load('results/mine_surr_noise.mat');
mine_fixed_noise = load('results/mine_fixed_noise.mat');
mine_surr_taylor = load('results/mine_surr_taylor.mat');
mine_fixed_taylor = load('results/mine_fixed_taylor.mat');
for run=1:runs
llh_calls(expt_index,run,1) = sum(mine_simple.results(run).num_llh_calls);
llh_calls(expt_index,run,2) = sum(mine_chol.results(run).num_llh_calls);
llh_calls(expt_index,run,3) = sum(mine_surr_noise.results(run).num_llh_calls);
llh_calls(expt_index,run,4) = sum(mine_fixed_noise.results(run).num_llh_calls);
llh_calls(expt_index,run,5) = sum(mine_surr_taylor.results(run).num_llh_calls);
llh_calls(expt_index,run,6) = sum(mine_fixed_taylor.results(run).num_llh_calls);
cov_calls(expt_index,run,1) = sum(mine_simple.results(run).num_cov_calls);
cov_calls(expt_index,run,2) = sum(mine_chol.results(run).num_cov_calls);
cov_calls(expt_index,run,3) = sum(mine_surr_noise.results(run).num_cov_calls);
cov_calls(expt_index,run,4) = sum(mine_fixed_noise.results(run).num_cov_calls);
cov_calls(expt_index,run,5) = sum(mine_surr_taylor.results(run).num_cov_calls);
cov_calls(expt_index,run,6) = sum(mine_fixed_taylor.results(run).num_cov_calls);
effcomp(expt_index,run,1) = mine_simple.results(run).eff_comp_llh_samples;
effcomp(expt_index,run,2) = mine_chol.results(run).eff_comp_llh_samples;
effcomp(expt_index,run,3) = mine_surr_noise.results(run).eff_comp_llh_samples;
effcomp(expt_index,run,4) = mine_fixed_noise.results(run).eff_comp_llh_samples;
effcomp(expt_index,run,5) = mine_surr_taylor.results(run).eff_comp_llh_samples;
effcomp(expt_index,run,6) = mine_fixed_taylor.results(run).eff_comp_llh_samples;
effcond(expt_index,run,1) = mine_simple.results(run).eff_cond_llh_samples;
effcond(expt_index,run,2) = mine_chol.results(run).eff_cond_llh_samples;
effcond(expt_index,run,3) = mine_surr_noise.results(run).eff_cond_llh_samples;
effcond(expt_index,run,4) = mine_fixed_noise.results(run).eff_cond_llh_samples;
effcond(expt_index,run,5) = mine_surr_taylor.results(run).eff_cond_llh_samples;
effcond(expt_index,run,6) = mine_fixed_taylor.results(run).eff_cond_llh_samples;
elapsed(expt_index,run,1) = mine_simple.results(run).elapsed;
elapsed(expt_index,run,2) = mine_chol.results(run).elapsed;
elapsed(expt_index,run,3) = mine_surr_noise.results(run).elapsed;
elapsed(expt_index,run,4) = mine_fixed_noise.results(run).elapsed;
elapsed(expt_index,run,5) = mine_surr_taylor.results(run).elapsed;
elapsed(expt_index,run,6) = mine_fixed_taylor.results(run).elapsed;
end
%%%%%%%%% redwood %%%%%%%%%
expt_index = 4;
setup = setup_redwood();
runs = setup.runs;
redwood_simple = load('results/redwood_simple.mat');
redwood_chol = load('results/redwood_chol.mat');
redwood_surr_noise = load('results/redwood_surr_noise.mat');
redwood_fixed_noise = load('results/redwood_fixed_noise.mat');
redwood_surr_taylor = load('results/redwood_surr_taylor.mat');
redwood_fixed_taylor = load('results/redwood_fixed_taylor.mat');
for run=1:runs
llh_calls(expt_index,run,1) = sum(redwood_simple.results(run).num_llh_calls);
llh_calls(expt_index,run,2) = sum(redwood_chol.results(run).num_llh_calls);
llh_calls(expt_index,run,3) = sum(redwood_surr_noise.results(run).num_llh_calls);
llh_calls(expt_index,run,4) = sum(redwood_fixed_noise.results(run).num_llh_calls);
llh_calls(expt_index,run,5) = sum(redwood_surr_taylor.results(run).num_llh_calls);
llh_calls(expt_index,run,6) = sum(redwood_fixed_taylor.results(run).num_llh_calls);
cov_calls(expt_index,run,1) = sum(redwood_simple.results(run).num_cov_calls);
cov_calls(expt_index,run,2) = sum(redwood_chol.results(run).num_cov_calls);
cov_calls(expt_index,run,3) = sum(redwood_surr_noise.results(run).num_cov_calls);
cov_calls(expt_index,run,4) = sum(redwood_fixed_noise.results(run).num_cov_calls);
cov_calls(expt_index,run,5) = sum(redwood_surr_taylor.results(run).num_cov_calls);
cov_calls(expt_index,run,6) = sum(redwood_fixed_taylor.results(run).num_cov_calls);
effcomp(expt_index,run,1) = redwood_simple.results(run).eff_comp_llh_samples;
effcomp(expt_index,run,2) = redwood_chol.results(run).eff_comp_llh_samples;
effcomp(expt_index,run,3) = redwood_surr_noise.results(run).eff_comp_llh_samples;
effcomp(expt_index,run,4) = redwood_fixed_noise.results(run).eff_comp_llh_samples;
effcomp(expt_index,run,5) = redwood_surr_taylor.results(run).eff_comp_llh_samples;
effcomp(expt_index,run,6) = redwood_fixed_taylor.results(run).eff_comp_llh_samples;
effcond(expt_index,run,1) = redwood_simple.results(run).eff_cond_llh_samples;
effcond(expt_index,run,2) = redwood_chol.results(run).eff_cond_llh_samples;
effcond(expt_index,run,3) = redwood_surr_noise.results(run).eff_cond_llh_samples;
effcond(expt_index,run,4) = redwood_fixed_noise.results(run).eff_cond_llh_samples;
effcond(expt_index,run,5) = redwood_surr_taylor.results(run).eff_cond_llh_samples;
effcond(expt_index,run,6) = redwood_fixed_taylor.results(run).eff_cond_llh_samples;
elapsed(expt_index,run,1) = redwood_simple.results(run).elapsed;
elapsed(expt_index,run,2) = redwood_chol.results(run).elapsed;
elapsed(expt_index,run,3) = redwood_surr_noise.results(run).elapsed;
elapsed(expt_index,run,4) = redwood_fixed_noise.results(run).elapsed;
elapsed(expt_index,run,5) = redwood_surr_taylor.results(run).elapsed;
elapsed(expt_index,run,6) = redwood_fixed_taylor.results(run).elapsed;
end
%%%%%%%%%% compllh per llh %%%%%%%%%%%
set(0, 'DefaultTextInterpreter', 'tex', ...
'DefaultTextFontName', 'Helvetica', ...
'DefaultTextFontSize', fontsize, ...
'DefaultAxesFontName', 'Helvetica', ...
'DefaultAxesFontSize', fontsize);
figure('Units', 'inches', ...
'Position', [0 0 fig_width fig_height], ...
'PaperPositionMode', 'auto');
offx = 0.06;
subplot('Position', subplot_pos + [offx 0 0 0]);
measure = effcomp ./ llh_calls;
means = squeeze(mean(measure,2));
stds = sqrt(squeeze(mean(measure.^2,2) - mean(measure,2).^2));
scales = repmat(means(:,3), [1 num_methods]);
b = bar([1:num_expts], means./scales);
c = get(b, 'Children');
xdata = zeros([num_methods*4 1]);
ydata = zeros([num_methods*4 1]);
idx = 1;
for i = 1:length(c)
xdata_i = mean(get(c{i}, 'xdata'));
tmp_y = get(c{i}, 'ydata');
ydata_i = mean(tmp_y(2:3,:));
for j = 1:4
xdata(idx) = xdata_i(j);
ydata(idx) = ydata_i(j);
idx = idx + 1;
end
end
hold on;
errorbar(xdata, ydata, stds(:)./(sqrt(num_runs)*scales(:)), 'k.');
hold off;
set(gca, 'XTickLabel', expt_names);
set(gca, 'Box', 'off');
xlim([0.5 4.5]);
%ylim([0 2]);
%ylabel('Effective samples');
title('Effective samples per likelihood evaluation');
for j = 1:4
[figx figy] = dsxy2figxy(j, 0);
ann = annotation('textbox', [figx 0.01 0.01 0.01], ...
'FitBoxToText', 'on', ...
'LineStyle', 'none', ...
'HorizontalAlignment', 'center', ...
'FontSize', fontsize, ...
'VerticalAlignment', 'baseline', ...
'Margin', 0, ...
'String', sprintf(format_string, scales(j,1)));
end
offx = offx + 0.32;
subplot('Position', subplot_pos + [offx 0 0 0]);
measure = effcomp ./ cov_calls;
means = squeeze(mean(measure,2));
stds = sqrt(squeeze(mean(measure.^2,2) - mean(measure,2).^2));
scales = repmat(means(:,3), [1 num_methods]);
b = bar([1:num_expts], means./scales);
c = get(b, 'Children');
xdata = zeros([num_methods*4 1]);
ydata = zeros([num_methods*4 1]);
idx = 1;
for i = 1:length(c)
xdata_i = mean(get(c{i}, 'xdata'));
tmp_y = get(c{i}, 'ydata');
ydata_i = mean(tmp_y(2:3,:));
for j = 1:4
xdata(idx) = xdata_i(j);
ydata(idx) = ydata_i(j);
idx = idx + 1;
end
end
hold on;
errorbar(xdata, ydata, stds(:)./(sqrt(num_runs)*scales(:)), 'k.');
hold off;
set(gca, 'XTickLabel', expt_names);
set(gca, 'Box', 'off');
xlim([0.5 4.5]);
%ylim([0 2]);
title('Effective samples per covariance construction');
for j = 1:4
[figx figy] = dsxy2figxy(j, 0);
ann = annotation('textbox', [figx 0.01 0.01 0.01], ...
'FitBoxToText', 'on', ...
'LineStyle', 'none', ...
'HorizontalAlignment', 'center', ...
'FontSize', fontsize, ...
'VerticalAlignment', 'baseline', ...
'Margin', 0, ...
'String', sprintf(format_string, scales(j,1)));
end
hl = legend(method_names, 'location', 'NorthOutside', 'orientation', 'horizontal');
set(hl, 'position', [0.2 0.9, 0.6 0.1]);
offx = offx + 0.32;
subplot('Position', subplot_pos + [offx 0 0 0]);
measure = effcomp ./ elapsed;
means = squeeze(mean(measure,2));
stds = sqrt(squeeze(mean(measure.^2,2) - mean(measure,2).^2));
scales = repmat(means(:,3), [1 num_methods]);
b = bar([1:num_expts], means./scales);
c = get(b, 'Children');
xdata = zeros([num_methods*4 1]);
ydata = zeros([num_methods*4 1]);
idx = 1;
for i = 1:length(c)
xdata_i = mean(get(c{i}, 'xdata'));
tmp_y = get(c{i}, 'ydata');
ydata_i = mean(tmp_y(2:3,:));
for j = 1:4
xdata(idx) = xdata_i(j);
ydata(idx) = ydata_i(j);
idx = idx + 1;
end
end
hold on;
errorbar(xdata, ydata, stds(:)./(sqrt(num_runs)*scales(:)), 'k.');
hold off;
set(gca, 'XTickLabel', expt_names);
set(gca, 'Box', 'off');
xlim([0.5 4.5]);
%ylim([0 2]);
title('Effective samples per second');
for j = 1:4
[figx figy] = dsxy2figxy(j, 0);
ann = annotation('textbox', [figx 0.01 0.01 0.01], ...
'FitBoxToText', 'on', ...
'LineStyle', 'none', ...
'HorizontalAlignment', 'center', ...
'FontSize', fontsize, ...
'VerticalAlignment', 'baseline', ...
'Margin', 0, ...
'String', sprintf(format_string, scales(j,1)));
end
print(gcf, 'plots/effsamp_bars_all.eps', '-depsc');
close;
surr_code/surr_code/poiss_aux_fixed.m0000644000175000017500000000013511415646336017017 0ustar iam23iam23function [aux_std, gg] = poiss_aux_fixed(counts)
gg = log(max(counts, 1));
aux_std = 1./gg;
surr_code/surr_code/run_ionosphere_surr_noise.m0000644000175000017500000000512111415646336021135 0ustar iam23iam23function run_ionosphere_surr_noise()
addpath('gpml');
experiment_setup()
setup = setup_ionosphere();
name = 'ionosphere_surr_noise';
fn = @(run) ionosphere_run(setup, run);
success = experiment_run(name, setup.runs, fn, true);
function results = ionosphere_run(setup, run)
UNPACK_STRUCT(setup);
counting_llh = add_call_counter(llh_fn, {});
counting_cov = add_call_counter(cov_fn, {});
tic;
[N D] = size(train_x);
theta = zeros([D 1]);
chol_cov = chol(counting_cov(theta));
ff = chol_cov' * randn([N 1]);
gain = 1;
cur_llh = counting_llh(ff, gain);
ff_samples = zeros([iterations N]);
theta_samples = zeros([iterations D]);
gain_samples = zeros([iterations 1]);
cond_llh_samples = zeros([iterations 1]);
comp_llh_samples = zeros([iterations 1]);
num_llh_calls = zeros([iterations+burn 1]);
num_cov_calls = zeros([iterations+burn 1]);
for ii = (1-burn):iterations
if mod(ii, 1) == 0
if ii > 0
fprintf('%03d/%03d] Iter %05d / %05d Train Error: %0.2f \n', run, runs, ...
ii, iterations, train_error_fn(mean(ff_samples(1:ii,:),1)'));
else
fprintf('%03d/%03d] Iter %05d / %05d\n', run, runs, ii, iterations);
end
end
[theta ff aux chol_cov] = update_theta_aux_surr(theta, ff, @(x) counting_llh(x, gain), ...
counting_cov, ...
@(theta,K) aux_noise_fn(theta, K, gain), ...
theta_log_prior, slice_width);
for jj = 1:ess_iterations
[ff cur_llh] = gppu_elliptical(ff, chol_cov, @(x) counting_llh(x, gain));
end
[gain cur_llh] = update_gain(gain, ff, cur_llh);
num_llh_calls(ii+burn) = counting_llh({});
num_cov_calls(ii+burn) = counting_cov({});
if ii > 0
ff_samples(ii,:) = ff';
theta_samples(ii,:) = theta';
gain_samples(ii) = gain;
cond_llh_samples(ii) = cur_llh;
comp_llh_samples(ii) = cur_llh - 0.5*ff'*solve_chol(chol_cov, ff) - sum(log(diag(chol_cov))) - 0.5*N*log(2*pi);
end
end
elapsed = toc;
results.ff_samples = ff_samples;
results.theta_samples = theta_samples;
results.gain_samples = gain_samples;
results.cond_llh_samples = cond_llh_samples;
results.comp_llh_samples = comp_llh_samples;
results.num_llh_calls = num_llh_calls;
results.num_cov_calls = num_cov_calls;
results.elapsed = elapsed;
results.eff_cond_llh_samples = effective_size_rcoda(cond_llh_samples(:));
results.eff_comp_llh_samples = effective_size_rcoda(comp_llh_samples(:));
fprintf('%03d/%3d] CondLLH Eff Samp: %0.2f CompLLH Eff Samp: %0.2f %0.2f secs\n\n', ...
run, runs, results.eff_cond_llh_samples, results.eff_comp_llh_samples, elapsed);
surr_code/surr_code/run_mine_fixed_taylor.m0000644000175000017500000000526611415646336020225 0ustar iam23iam23function run_mine_fixed_taylor()
addpath('gpml');
experiment_setup()
setup = setup_mine();
name = 'mine_fixed_taylor';
fn = @(run) mine_run(setup, run);
success = experiment_run(name, setup.runs, fn, true);
function results = mine_run(setup, run)
UNPACK_STRUCT(setup);
counting_llh = add_call_counter(llh_fn, {});
counting_cov = add_call_counter(cov_fn, {});
tic;
[N D] = size(X);
theta = log(rand(D)*(max_ls-min_ls) + min_ls);
chol_cov = chol(counting_cov(theta));
ff = chol_cov' * randn([N 1]);
gain = 1;
gp_mean = log(mean(Y));
cur_llh = counting_llh(ff, gain, gp_mean);
ff_samples = zeros([iterations N]);
theta_samples = zeros([iterations D]);
gain_samples = zeros([iterations 1]);
mean_samples = zeros([iterations 1]);
cond_llh_samples = zeros([iterations 1]);
comp_llh_samples = zeros([iterations 1]);
num_llh_calls = zeros([iterations+burn 1]);
num_cov_calls = zeros([iterations+burn 1]);
for ii = (1-burn):iterations
if mod(ii, 10) == 0
fprintf('%03d/%03d] Iter %05d / %05d\n', run, runs, ii, iterations);
end
[theta ff] = update_theta_aux_fixed(theta, ff, @(x) counting_llh(x, gain, gp_mean), ...
counting_cov, ...
@(theta,K) aux_taylor_fn(theta, K, gain, gp_mean), ...
theta_log_prior, slice_width);
chol_cov = chol(counting_cov(theta));
for jj = 1:ess_iterations
[ff cur_llh] = gppu_elliptical(ff, chol_cov, @(x) counting_llh(x, gain, gp_mean));
end
[gain cur_llh] = update_gain(gain, ff, gp_mean, cur_llh);
[gp_mean cur_llh] = update_mean(gp_mean, ff, gain, cur_llh);
num_llh_calls(ii+burn) = counting_llh({});
num_cov_calls(ii+burn) = counting_cov({});
if ii > 0
ff_samples(ii,:) = ff';
theta_samples(ii,:) = theta';
gain_samples(ii) = gain;
mean_samples(ii) = gp_mean;
cond_llh_samples(ii) = cur_llh;
comp_llh_samples(ii) = cur_llh - 0.5*ff'*solve_chol(chol_cov, ff) - sum(log(diag(chol_cov))) - 0.5*N*log(2*pi);
end
end
elapsed = toc;
results.ff_samples = ff_samples;
results.theta_samples = theta_samples;
results.gain_samples = gain_samples;
results.mean_samples = mean_samples;
results.cond_llh_samples = cond_llh_samples;
results.comp_llh_samples = comp_llh_samples;
results.num_llh_calls = num_llh_calls;
results.num_cov_calls = num_cov_calls;
results.elapsed = elapsed;
results.eff_cond_llh_samples = effective_size_rcoda(cond_llh_samples(:));
results.eff_comp_llh_samples = effective_size_rcoda(comp_llh_samples(:));
fprintf('%03d/%3d] CondLLH Eff Samp: %0.2f CompLLH Eff Samp: %0.2f %0.2f secs\n\n', ...
run, runs, results.eff_cond_llh_samples, results.eff_comp_llh_samples, elapsed);
surr_code/surr_code/run_gaussian_chol.m0000644000175000017500000000417511415646336017341 0ustar iam23iam23function run_gaussian_chol()
addpath('gpml');
experiment_setup()
setup = setup_gaussian();
name = 'gaussian_chol';
fn = @(run) gaussian_run(setup, run);
success = experiment_run(name, setup.runs, fn, true);
function results = gaussian_run(setup, run)
UNPACK_STRUCT(setup);
counting_llh = add_call_counter(llh_fn, {});
counting_cov = add_call_counter(cov_fn, {});
tic;
[N D] = size(X);
theta = zeros([D 1]);
chol_cov = chol(counting_cov(theta));
ff = chol_cov' * randn([N 1]);
cur_llh = counting_llh(ff);
ff_samples = zeros([iterations N]);
theta_samples = zeros([iterations D]);
cond_llh_samples = zeros([iterations 1]);
comp_llh_samples = zeros([iterations 1]);
num_llh_calls = zeros([iterations+burn 1]);
num_cov_calls = zeros([iterations+burn 1]);
for ii = (1-burn):iterations
if mod(ii, 1) == 0
fprintf('%03d/%03d] Iter %05d / %05d\n', run, runs, ii, iterations);
end
[theta ff chol_cov] = update_theta_aux_chol(theta, ff, @(x) counting_llh(x), ...
counting_cov, theta_log_prior, slice_width);
for jj = 1:ess_iterations
[ff cur_llh] = gppu_elliptical(ff, chol_cov, @(x) counting_llh(x));
end
num_llh_calls(ii+burn) = counting_llh({});
num_cov_calls(ii+burn) = counting_cov({});
if ii > 0
ff_samples(ii,:) = ff';
theta_samples(ii,:) = theta';
cond_llh_samples(ii) = cur_llh;
comp_llh_samples(ii) = cur_llh - 0.5*ff'*solve_chol(chol_cov, ff) ...
- sum(log(diag(chol_cov))) - 0.5*N*log(2*pi);
end
end
elapsed = toc;
results.ff_samples = ff_samples;
results.theta_samples = theta_samples;
results.cond_llh_samples = cond_llh_samples;
results.comp_llh_samples = comp_llh_samples;
results.num_llh_calls = num_llh_calls;
results.num_cov_calls = num_cov_calls;
results.elapsed = elapsed;
results.eff_cond_llh_samples = effective_size_rcoda(cond_llh_samples(:));
results.eff_comp_llh_samples = effective_size_rcoda(comp_llh_samples(:));
fprintf('%03d/%3d] CondLLH Eff Samp: %0.2f CompLLH Eff Samp: %0.2f %0.2f secs\n\n', ...
run, runs, results.eff_cond_llh_samples, results.eff_comp_llh_samples, elapsed);
surr_code/surr_code/run_redwood_fixed_taylor.m0000644000175000017500000000526311415646336020735 0ustar iam23iam23function run_redwood_fixed_taylor()
addpath('gpml');
experiment_setup()
setup = setup_redwood();
name = 'redwood_fixed_taylor';
fn = @(run) redwood_run(setup, run);
success = experiment_run(name, setup.runs, fn, true);
function results = redwood_run(setup, run)
UNPACK_STRUCT(setup);
counting_llh = add_call_counter(llh_fn, {});
counting_cov = add_call_counter(cov_fn, {});
tic;
[N D] = size(X);
theta = log(rand([D 1])*(max_ls-min_ls) + min_ls);
chol_cov = chol(counting_cov(theta));
ff = chol_cov' * randn([N 1]);
gain = 1;
gp_mean = log(mean(Y));
cur_llh = counting_llh(ff, gain, gp_mean);
ff_samples = zeros([iterations N]);
theta_samples = zeros([iterations D]);
gain_samples = zeros([iterations 1]);
mean_samples = zeros([iterations 1]);
cond_llh_samples = zeros([iterations 1]);
comp_llh_samples = zeros([iterations 1]);
num_llh_calls = zeros([iterations+burn 1]);
num_cov_calls = zeros([iterations+burn 1]);
for ii = (1-burn):iterations
if mod(ii, 10) == 0
fprintf('%03d/%03d] Iter %05d / %05d\n', run, runs, ii, iterations);
end
[theta ff aux chol_cov] = update_theta_aux_fixed(theta, ff, ...
@(x) counting_llh(x, gain, gp_mean), ...
counting_cov, ...
@(theta, K) aux_taylor_fn(theta, K, gain, gp_mean), ...
theta_log_prior, slice_width);
for jj = 1:ess_iterations
[ff cur_llh] = gppu_elliptical(ff, chol_cov, @(x) counting_llh(x, gain, gp_mean));
end
[gain cur_llh] = update_gain(gain, ff, gp_mean, cur_llh);
[gp_mean cur_llh] = update_mean(gp_mean, ff, gain, cur_llh);
num_llh_calls(ii+burn) = counting_llh({});
num_cov_calls(ii+burn) = counting_cov({});
if ii > 0
ff_samples(ii,:) = ff';
theta_samples(ii,:) = theta';
gain_samples(ii) = gain;
mean_samples(ii) = gp_mean;
cond_llh_samples(ii) = cur_llh;
comp_llh_samples(ii) = cur_llh - 0.5*ff'*solve_chol(chol_cov, ff) - sum(log(diag(chol_cov))) - 0.5*N*log(2*pi);
end
end
elapsed = toc;
results.ff_samples = ff_samples;
results.theta_samples = theta_samples;
results.gain_samples = gain_samples;
results.mean_samples = mean_samples;
results.cond_llh_samples = cond_llh_samples;
results.comp_llh_samples = comp_llh_samples;
results.num_llh_calls = num_llh_calls;
results.num_cov_calls = num_cov_calls;
results.elapsed = elapsed;
results.eff_cond_llh_samples = effective_size_rcoda(cond_llh_samples(:));
results.eff_comp_llh_samples = effective_size_rcoda(comp_llh_samples(:));
fprintf('%03d/%3d] CondLLH Eff Samp: %0.2f CompLLH Eff Samp: %0.2f %0.2f secs\n\n', ...
run, runs, results.eff_cond_llh_samples, results.eff_comp_llh_samples, elapsed);
surr_code/surr_code/UNPACK_STRUCT.m0000644000175000017500000000146111415646336015716 0ustar iam23iam23function UNPACK_STRUCT(strct, warn)
%UNPACK_STRUCT make all fields of the structure separate variables in the calling workspace
%
% UNPACK_STRUCT(strct[, warn])
%
% All of the fields of strct will now exist as top-level variables in the
% calling workspace. This may remove a lot of 'strct.' clutter from code.
%
% Unless the warn option is given and set to 'false', the code will emit
% warnings if pre-existing variables are being over-written.
% Iain Murray, October 2009
if ~exist('warn', 'var')
warn = true;
end
args = fieldnames(strct);
for ff = args(:)'
field = ff{1};
if warn && evalin('caller', ['exist(''', field, ''', ''var'')'])
warning(['Over-writing variable ''', field, '''. Set warn=false if this was intended']);
end
assignin('caller', field, strct.(field));
end
surr_code/surr_code/slice_sweep.m0000644000175000017500000000667411415646336016146 0ustar iam23iam23function particle = slice_sweep(particle, slice_fn, sigma, step_out)
%SLICE_SWEEP one set of axis-aligned slice-sampling updates of particle.pos
%
% particle = slice_sweep(particle, slice_fn[, sigma[, step_out]])
%
% The particle position is updated with a standard univariate slice-sampler.
% Stepping out is linear (if step_out is true), but shrinkage is exponential. A
% sensible strategy is to set sigma conservatively large and turn step_out off.
% If it's hard to set a good sigma though, you should leave step_out=true.
%
% Inputs:
% particle sct Structure contains:
% .pos - initial position on slice as Dx1 vector
% (or any array)
% .Lpstar - log probability of .pos (up to a constant)
% .on_slice - needn't be set initially but is set
% during slice sampling. Particle must enter
% and leave this routine "on the slice".
% slice_fn @fn particle = slice_fn(particle, Lpstar_min)
% If particle.on_slice then particle.Lpstar should be
% correct, otherwise its value is arbitrary.
% sigma (D|1)x1 step size parameter(s) (default=1)
% step_out 1x1 if non-zero, do stepping out procedure (default), else
% only step in (saves on fn evals, but takes smaller steps)
%
% Outputs:
% particle sct particle.pos and .Lpstar are updated.
% Originally based on pseudo-code in David MacKay's text book p375
% Iain Murray, May 2004, January 2007, June 2008, January 2009
if nargin < 3; sigma = 1; end
if nargin < 4; step_out = 1; end
DD = numel(particle.pos);
if length(sigma) == 1
sigma = repmat(sigma, DD, 1);
end
%for dd = 1:DD
% A random order is more robust generally and important inside
% algorithms like nested sampling and AIS
for dd = randperm(DD)
Lpstar_min = particle.Lpstar + log(rand);
% Create a horizontal interval (x_l, x_r) enclosing x_cur
x_cur = particle.pos(dd);
rr = rand;
x_l = x_cur - rr*sigma(dd);
x_r = x_cur + (1-rr)*sigma(dd);
if step_out
particle.pos(dd) = x_l;
while 1
particle = slice_fn(particle, Lpstar_min);
if ~particle.on_slice
break
end
particle.pos(dd) = particle.pos(dd) - sigma(dd);
end
x_l = particle.pos(dd);
particle.pos(dd) = x_r;
while 1
particle = slice_fn(particle, Lpstar_min);
if ~particle.on_slice
break
end
particle.pos(dd) = particle.pos(dd) + sigma(dd);
end
x_r = particle.pos(dd);
end
% Make proposals and shrink interval until acceptable point found
% One should only get stuck in this loop forever on badly behaved problems,
% which should probably be reformulated.
chk = 0;
while 1
particle.pos(dd) = rand*(x_r - x_l) + x_l;
particle = slice_fn(particle, Lpstar_min);
if particle.on_slice
break % Only way to leave the while loop.
else
% Shrink in
if particle.pos(dd) > x_cur
x_r = particle.pos(dd);
elseif particle.pos(dd) < x_cur
x_l = particle.pos(dd);
else
error('BUG DETECTED: Shrunk to current position and still not acceptable.');
end
end
end
end
surr_code/surr_code/effective_size_rcoda.m0000644000175000017500000000603211415646336017772 0ustar iam23iam23function sz = effective_size_rcoda(samples)
%EFFECTIVE_SIZE_RCODA estimate effective sample sizes of each column (calls R-CODA)
%
% sz = effective_size_rcoda(samples)
%
% This function simply shells R and runs the effectiveSize routine in R-CODA.
%
% Inputs:
% samples DxN each column is processed in isolation
%
% Outputs:
% sz 1xN estimated effective sample size for each column
% It's probably a good idea to have a native Matlab version of this routine.
% But then I would need this routine anyway to test it.
% Iain Murray, September 2009.
% Bugfixed N>8 February 2010.
[D, N] = size(samples);
if (D == 1)
error('Cannot process "time series" of length 1.');
end
% HACK: This routine only works for small N, because the output file parsing
% below fails if R puts the answers on more than one line. Rather than improve
% the parsing, I quickly bodged in the following fix:
max_N = 5; % (max_N == 8) seems to be ok, but playing safe.
if N > max_N
sz = [effective_size_rcoda(samples(:, 1:max_N)), ...
effective_size_rcoda(samples(:, (max_N+1):end))];
return
end
% The user should put the version of R that they want to use in their system's
% PATH, but if that fails I will try looking in other places too.
Rlocs = {'', '/opt/local/bin/'};
for loc = Rlocs
[status, output] = system([loc{:}, 'R --version']);
if status == 0
R_cmd = [loc{:}, 'R --vanilla CMD BATCH '];
break;
end
end
if ~exist('R_cmd')
error('R executable not found');
end
% TODO cross-platform support for the following is untested. On my linux machine
% I need a clean environment or R fails. I do this with 'env -i'. I don't
% immediately know how to do this on other platforms, but maybe it isn't needed.
if exist('/usr/bin/env', 'file')
R_cmd = ['/usr/bin/env -i ', R_cmd];
end
% Set up location for temporary files (in RAM if possible)
dirnm = ['esz', sprintf('%d', floor(rand*100000))];
if exist('/dev/shm', 'dir')
% Linux
tmp_dir = ['/dev/shm/', dirnm, '/'];
else
% Should work on all platforms
tmp_dir = [tempdir(), dirnm, filesep()];
end
success = mkdir(tmp_dir);
assert(success);
% Write samples out to temporary file
save([tmp_dir, 'samples'], 'samples', '-ascii');
% Run R-CODA's routine on that file
R_program = ['library(coda)\n',...
'mcmcread=read.table("samples")\n',...
'mcmcrun=cbind(mcmcread)\n',...
'mcmcobj=mcmc(mcmcrun)\n',...
'effectiveSize(mcmcobj)\n'];
R_file = [tmp_dir, 'prog.r'];
fid = fopen(R_file, 'w');
fprintf(fid, R_program);
fclose(fid);
out_file = [tmp_dir, 'tmpRout'];
opwd = pwd();
cd(tmp_dir);
system_cmd = [R_cmd, R_file, ' ', out_file];
status = system(system_cmd);
cd(opwd);
assert(~status);
% grab result from output
fid = fopen(out_file);
output = fread(fid);
output = ['a' output(:)'];
fclose(fid);
snippet = output(strfind(output, 'V1'):end);
% FIXME may be Unix specific because of line terminating issues?
idx = strfind(snippet, sprintf('\n'));
snippet = snippet(idx(1)+1:idx(2)-1);
sz = sscanf(snippet, '%f')';
% Delete temporary directory
rmdir(tmp_dir, 's');
surr_code/surr_code/run_gaussian_fixed_taylor.m0000644000175000017500000000424111415646336021077 0ustar iam23iam23function run_gaussian_fixed_taylor()
addpath('gpml');
experiment_setup()
setup = setup_gaussian();
name = 'gaussian_fixed_taylor';
fn = @(run) gaussian_run(setup, run);
success = experiment_run(name, setup.runs, fn, true);
function results = gaussian_run(setup, run)
UNPACK_STRUCT(setup);
counting_llh = add_call_counter(llh_fn, {});
counting_cov = add_call_counter(cov_fn, {});
tic;
[N D] = size(X);
theta = zeros([D 1]);
chol_cov = chol(counting_cov(theta));
ff = chol_cov' * randn([N 1]);
cur_llh = counting_llh(ff);
ff_samples = zeros([iterations N]);
theta_samples = zeros([iterations D]);
cond_llh_samples = zeros([iterations 1]);
comp_llh_samples = zeros([iterations 1]);
num_llh_calls = zeros([iterations+burn 1]);
num_cov_calls = zeros([iterations+burn 1]);
for ii = (1-burn):iterations
if mod(ii, 1) == 0
fprintf('%03d/%03d] Iter %05d / %05d\n', run, runs, ii, iterations);
end
[theta ff aux chol_cov] = update_theta_aux_fixed(theta, ff, @(x) counting_llh(x), ...
counting_cov, aux_noise_fn, theta_log_prior, slice_width);
for jj = 1:ess_iterations
[ff cur_llh] = gppu_elliptical(ff, chol_cov, @(x) counting_llh(x));
end
num_llh_calls(ii+burn) = counting_llh({});
num_cov_calls(ii+burn) = counting_cov({});
if ii > 0
ff_samples(ii,:) = ff';
theta_samples(ii,:) = theta';
cond_llh_samples(ii) = cur_llh;
comp_llh_samples(ii) = cur_llh - 0.5*ff'*solve_chol(chol_cov, ff) ...
- sum(log(diag(chol_cov))) - 0.5*N*log(2*pi);
end
end
elapsed = toc;
results.ff_samples = ff_samples;
results.theta_samples = theta_samples;
results.cond_llh_samples = cond_llh_samples;
results.comp_llh_samples = comp_llh_samples;
results.num_llh_calls = num_llh_calls;
results.num_cov_calls = num_cov_calls;
results.elapsed = elapsed;
results.eff_cond_llh_samples = effective_size_rcoda(cond_llh_samples(:));
results.eff_comp_llh_samples = effective_size_rcoda(comp_llh_samples(:));
fprintf('%03d/%3d] CondLLH Eff Samp: %0.2f CompLLH Eff Samp: %0.2f %0.2f secs\n\n', ...
run, runs, results.eff_cond_llh_samples, results.eff_comp_llh_samples, elapsed);
surr_code/surr_code/lambertw_approx.m0000644000175000017500000000075411415646336017043 0ustar iam23iam23function w = lambertw_approx(x)
% See Corless, R. M.; Gonnet, G. H.; Hare, D. E. G.; Jeffrey, D. J.; Knuth, D.
% E. (1996). "On the Lambert W function". Advances in Computational Mathematics
% 5: 329–359. doi:10.1007/BF02124750
%
% ...or Wikipedia which is where I really got this from(!)
w = log(1 + x); % my arbitrary initialization.
% Halley's method updates
iters = 2;
for ii = 1:iters
expw = exp(w);
w = w - (w.*expw - x) ./ (expw.*(w+1) - (w+2).*(w.*expw-x)./(2*w+2));
end
surr_code/surr_code/run_redwood_surr_taylor.m0000644000175000017500000000526211415646336020630 0ustar iam23iam23function run_redwood_surr_taylor()
addpath('gpml');
experiment_setup()
setup = setup_redwood();
name = 'redwood_surr_taylor';
fn = @(run) redwood_run(setup, run);
success = experiment_run(name, setup.runs, fn, true);
function results = redwood_run(setup, run)
UNPACK_STRUCT(setup);
counting_llh = add_call_counter(llh_fn, {});
counting_cov = add_call_counter(cov_fn, {});
tic;
[N D] = size(X);
theta = log(rand([D 1])*(max_ls-min_ls) + min_ls);
chol_cov = chol(counting_cov(theta));
ff = chol_cov' * randn([N 1]);
gain = 1;
gp_mean = log(mean(Y));
cur_llh = counting_llh(ff, gain, gp_mean);
ff_samples = zeros([iterations N]);
theta_samples = zeros([iterations D]);
gain_samples = zeros([iterations 1]);
mean_samples = zeros([iterations 1]);
cond_llh_samples = zeros([iterations 1]);
comp_llh_samples = zeros([iterations 1]);
num_llh_calls = zeros([iterations+burn 1]);
num_cov_calls = zeros([iterations+burn 1]);
for ii = (1-burn):iterations
if mod(ii, 10) == 0
fprintf('%03d/%03d] Iter %05d / %05d\n', run, runs, ii, iterations);
end
[theta ff aux chol_cov] = update_theta_aux_surr(theta, ff, ...
@(x) counting_llh(x, gain, gp_mean), ...
counting_cov, ...
@(theta, K) aux_taylor_fn(theta, K, gain, gp_mean), ...
theta_log_prior, slice_width);
for jj = 1:ess_iterations
[ff cur_llh] = gppu_elliptical(ff, chol_cov, @(x) counting_llh(x, gain, gp_mean));
end
[gain cur_llh] = update_gain(gain, ff, gp_mean, cur_llh);
[gp_mean cur_llh] = update_mean(gp_mean, ff, gain, cur_llh);
num_llh_calls(ii+burn) = counting_llh({});
num_cov_calls(ii+burn) = counting_cov({});
if ii > 0
ff_samples(ii,:) = ff';
theta_samples(ii,:) = theta';
gain_samples(ii) = gain;
mean_samples(ii) = gp_mean;
cond_llh_samples(ii) = cur_llh;
comp_llh_samples(ii) = cur_llh - 0.5*ff'*solve_chol(chol_cov, ff) - sum(log(diag(chol_cov))) - 0.5*N*log(2*pi);
end
end
elapsed = toc;
results.ff_samples = ff_samples;
results.theta_samples = theta_samples;
results.gain_samples = gain_samples;
results.mean_samples = mean_samples;
results.cond_llh_samples = cond_llh_samples;
results.comp_llh_samples = comp_llh_samples;
results.num_llh_calls = num_llh_calls;
results.num_cov_calls = num_cov_calls;
results.elapsed = elapsed;
results.eff_cond_llh_samples = effective_size_rcoda(cond_llh_samples(:));
results.eff_comp_llh_samples = effective_size_rcoda(comp_llh_samples(:));
fprintf('%03d/%3d] CondLLH Eff Samp: %0.2f CompLLH Eff Samp: %0.2f %0.2f secs\n\n', ...
run, runs, results.eff_cond_llh_samples, results.eff_comp_llh_samples, elapsed);
surr_code/surr_code/set_aux_noise_std.m0000644000175000017500000000360011415646336017345 0ustar iam23iam23function aux_noise_std = set_aux_noise_std(Lelement_fn, prior_std, K)
%SET_AUX_NOISE_STD compute a diagonal element of S auxiliary variance matrix
%
% aux_noise_std = set_aux_noise_std(Lelement_fn, prior_std)
%
% Inputs:
% Lelement_fn @fn takes an array of possible values for f_n and computes
% the log-likelihood, log(p(data|f_n)), of each setting.
% prior_std 1x1 marginal prior std-dev of f_n, sqrt(K(n,n))
%
% Outputs:
% aux_noise_std 1x1 recommended value for S(n,n)
% Iain Murray, November 2009, January 2010
% I did try using fancier quadrature. see:
% set_aux_noise_std_quad
% for my attempt. It is is fiddly numerically (need to first identify size of a
% large likelihood value to subtract off to avoid overflow), Matlab's quadrature
% routines are either not robust, or slower than the more straightforward code
% here. Although the code here will fail on very sharply peaked likelihoods,
% a simple diagnostic is performed to notice this, but more code could be
% written to deal with it.
prior_var = prior_std*prior_std;
prior_precision = 1/prior_var;
if nargin < 3
K = 100;
end
hh = 1/K;
persistent grid;
if length(grid) ~= K
grid = 8*(hh/2:hh:(1-hh/2)) - 4;
end
% Numerically find variance of marginal posterior given a single observation
f = grid * prior_std;
Lpoststar = Lelement_fn(f) - 0.5*prior_precision*f.*f;
post = exp(Lpoststar - logsumexp(Lpoststar(:)));
if max(post) > 0.5
warning(sprintf(['Posterior sharply peaked compared to grid.\n' ...
'TODO write code here to refine numerical approximation.']));
end
post_mu = f * post(:);
post_var = (f - post_mu).^2 * post(:);
% Compute width of Gaussian likelihood that would give same posterior variance
post_precision = 1/post_var;
if post_precision <= prior_precision
aux_noise_std = Inf;
else
aux_noise_std = sqrt(1 / (post_precision - prior_precision));
end
surr_code/surr_code/run_mine_fixed_noise.m0000644000175000017500000000522711415646336020025 0ustar iam23iam23function run_mine_fixed_noise()
addpath('gpml');
experiment_setup()
setup = setup_mine();
name = 'mine_fixed_noise';
fn = @(run) mine_run(setup, run);
success = experiment_run(name, setup.runs, fn, true);
function results = mine_run(setup, run)
UNPACK_STRUCT(setup);
counting_llh = add_call_counter(llh_fn, {});
counting_cov = add_call_counter(cov_fn, {});
tic;
[N D] = size(X);
theta = log(rand(D)*(max_ls-min_ls) + min_ls);
chol_cov = chol(counting_cov(theta));
ff = chol_cov' * randn([N 1]);
gain = 1;
gp_mean = log(mean(Y));
cur_llh = counting_llh(ff, gain, gp_mean);
ff_samples = zeros([iterations N]);
theta_samples = zeros([iterations D]);
gain_samples = zeros([iterations 1]);
mean_samples = zeros([iterations 1]);
cond_llh_samples = zeros([iterations 1]);
comp_llh_samples = zeros([iterations 1]);
num_llh_calls = zeros([iterations+burn 1]);
num_cov_calls = zeros([iterations+burn 1]);
for ii = (1-burn):iterations
if mod(ii, 1) == 0
fprintf('%03d/%03d] Iter %05d / %05d\n', run, runs, ii, iterations);
end
[theta ff aux chol_cov] = update_theta_aux_fixed(theta, ff, @(x) counting_llh(x, gain, gp_mean), ...
counting_cov, ...
@(theta,K) aux_noise_fn(theta, K, gain, gp_mean), ...
theta_log_prior, slice_width);
for jj = 1:ess_iterations
[ff cur_llh] = gppu_elliptical(ff, chol_cov, @(x) counting_llh(x, gain, gp_mean));
end
[gain cur_llh] = update_gain(gain, ff, gp_mean, cur_llh);
[gp_mean cur_llh] = update_mean(gp_mean, ff, gain, cur_llh);
num_llh_calls(ii+burn) = counting_llh({});
num_cov_calls(ii+burn) = counting_cov({});
if ii > 0
ff_samples(ii,:) = ff';
theta_samples(ii,:) = theta';
gain_samples(ii) = gain;
mean_samples(ii) = gp_mean;
cond_llh_samples(ii) = cur_llh;
comp_llh_samples(ii) = cur_llh - 0.5*ff'*solve_chol(chol_cov, ff) - sum(log(diag(chol_cov))) - 0.5*N*log(2*pi);
end
end
elapsed = toc;
results.ff_samples = ff_samples;
results.theta_samples = theta_samples;
results.gain_samples = gain_samples;
results.mean_samples = mean_samples;
results.cond_llh_samples = cond_llh_samples;
results.comp_llh_samples = comp_llh_samples;
results.num_llh_calls = num_llh_calls;
results.num_cov_calls = num_cov_calls;
results.elapsed = elapsed;
results.eff_cond_llh_samples = effective_size_rcoda(cond_llh_samples(:));
results.eff_comp_llh_samples = effective_size_rcoda(comp_llh_samples(:));
fprintf('%03d/%3d] CondLLH Eff Samp: %0.2f CompLLH Eff Samp: %0.2f %0.2f secs\n\n', ...
run, runs, results.eff_cond_llh_samples, results.eff_comp_llh_samples, elapsed);
surr_code/surr_code/run_ionosphere_chol.m0000644000175000017500000000504711415646336017701 0ustar iam23iam23function run_ionosphere_chol()
addpath('gpml');
experiment_setup()
setup = setup_ionosphere();
name = 'ionosphere_chol';
fn = @(run) ionosphere_run(setup, run);
success = experiment_run(name, setup.runs, fn, true);
function results = ionosphere_run(setup, run)
UNPACK_STRUCT(setup);
counting_llh = add_call_counter(llh_fn, {});
counting_cov = add_call_counter(cov_fn, {});
tic;
[N D] = size(train_x);
theta = log(rand([D 1])*(max_ls-min_ls) + min_ls);
chol_cov = chol(counting_cov(theta));
ff = chol_cov' * randn([N 1]);
gain = 1;
cur_llh = counting_llh(ff, gain);
ff_samples = zeros([iterations N]);
theta_samples = zeros([iterations D]);
gain_samples = zeros([iterations 1]);
cond_llh_samples = zeros([iterations 1]);
comp_llh_samples = zeros([iterations 1]);
num_llh_calls = zeros([iterations+burn 1]);
num_cov_calls = zeros([iterations+burn 1]);
for ii = (1-burn):iterations
if mod(ii, 1) == 0
if ii > 0
fprintf('%03d/%03d] Iter %05d / %05d Train Error: %0.2f \n', run, runs, ...
ii, iterations, train_error_fn(mean(ff_samples(1:ii,:),1)'));
else
fprintf('%03d/%03d] Iter %05d / %05d\n', run, runs, ii, iterations);
end
end
[theta ff chol_cov] = update_theta_aux_chol(theta, ff, @(x) counting_llh(x, gain), ...
counting_cov, theta_log_prior, slice_width);
for jj = 1:ess_iterations
[ff cur_llh] = gppu_elliptical(ff, chol_cov, @(x) counting_llh(x, gain));
end
[gain cur_llh] = update_gain(gain, ff, cur_llh);
num_llh_calls(ii+burn) = counting_llh({});
num_cov_calls(ii+burn) = counting_cov({});
if ii > 0
ff_samples(ii,:) = ff';
theta_samples(ii,:) = theta';
gain_samples(ii) = gain;
cond_llh_samples(ii) = cur_llh;
comp_llh_samples(ii) = cur_llh - 0.5*ff'*solve_chol(chol_cov, ff) - sum(log(diag(chol_cov))) - 0.5*N*log(2*pi);
end
end
elapsed = toc;
results.ff_samples = ff_samples;
results.theta_samples = theta_samples;
results.gain_samples = gain_samples;
results.cond_llh_samples = cond_llh_samples;
results.comp_llh_samples = comp_llh_samples;
results.num_llh_calls = num_llh_calls;
results.num_cov_calls = num_cov_calls;
results.elapsed = elapsed;
results.eff_cond_llh_samples = effective_size_rcoda(cond_llh_samples(:));
results.eff_comp_llh_samples = effective_size_rcoda(comp_llh_samples(:));
fprintf('%03d/%3d] CondLLH Eff Samp: %0.2f CompLLH Eff Samp: %0.2f %0.2f secs\n\n', ...
run, runs, results.eff_cond_llh_samples, results.eff_comp_llh_samples, elapsed);
surr_code/surr_code/experiment_setup.m0000644000175000017500000000026011415646336017225 0ustar iam23iam23function experiment_setup()
% Matlab setup needed for all experiments
addpath(genpath('experiment_toolbox'));
try % Stop errors in older Matlabs
maxNumCompThreads(1);
end
surr_code/surr_code/data/0000755000175000017500000000000011415646745014366 5ustar iam23iam23surr_code/surr_code/data/local_pred.m0000640000175000017500000000323011361630025016622 0ustar iam23iam23function local_pred()
K = 10; % Cross-validation experiment with naive median predictor
num_runs = 30;
[xx,yy] = read_forestfires();
N = size(xx, 2);
scores = zeros(num_runs, 1);
for run = 1:num_runs
idx = randperm(N);
xx = xx(:, idx);
yy = yy(idx);
y_batches = make_batched(yy', K);
x_batches = make_batched(xx, K);
for bb = 1:K
train_x = cell2mat(x_batches([1:bb-1,bb+1:K]));
train_y = cell2mat(y_batches([1:bb-1,bb+1:K]))';
test_x = x_batches{bb};
test_y = y_batches{bb}';
pred = blah(train_x, train_y, test_x);
scores(run) = scores(run) + mean(abs(test_y-pred));
end
scores(run) = scores(run) / K;
end
disp(errorbar_str(scores));
% Not significantly better than naive predictor.
%
% >> naive_pred
% 12.8372 +/- 0.0046
%
% SVM got 12.71 +/- 0.01
%
% Cheating with naive_pred gets:
% >> mean(abs(yy-median(yy)))
% ans = 12.83
function pred = blah(xx, yy, test_x)
fraction_to_use = 1/6;
xx = xx([9,10,11,12], :);
xx = [xx; ones(1, size(xx,2))];
test_x = test_x([9,10,11,12], :);
test_x = [test_x; ones(1, size(test_x,2))];
%Lyy = log(1+yy);
Lyy = log(yy);
% Linear regression on just non-zero outputs, to get some direction for
% visualization:
idx = ~(yy==min(yy));
xx2 = xx(:,idx);
yy2 = Lyy(idx);
%ww = xx2'\yy2;
%ww = xx'\Lyy;
ww = xx'\(Lyy>min(Lyy)); % Set direction just to separate fire vs non-fire
test_x = (test_x'*ww)';
xx = (xx'*ww)';
M = length(test_x);
N = length(yy);
% As this doesn't make any difference, it's all bunk!
[dummy, idx] = sort(square_dist(test_x, xx), 2); % MxN
pred = zeros(M, 1);
for mm = 1:M
pred(mm) = median(yy(idx(mm, 1:floor(N*fraction_to_use))));
end
surr_code/surr_code/data/naive_pred.m0000640000175000017500000000120411361630030016625 0ustar iam23iam23K = 10; % Cross-validation experiment with naive median predictor
num_runs = 30;
[xx,yy] = read_forestfires();
N = size(xx, 2);
scores = zeros(num_runs, 1);
for run = 1:num_runs
idx = randperm(N);
xx = xx(:, idx);
yy = yy(idx);
batches = make_batched(yy', K);
for bb = 1:K
pred = median(cell2mat(batches([1:bb-1,bb+1:K])));
scores(run) = scores(run) + mean(abs(batches{bb}-pred));
end
scores(run) = scores(run) / K;
end
disp(errorbar_str(scores));
% >> naive_pred
% 12.8372 +/- 0.0046
%
% SVM got 12.71 +/- 0.01
%
% Cheating with naive_pred gets:
% >> mean(abs(yy-median(yy)))
% ans = 12.83
surr_code/surr_code/data/naive_bootstrap.m0000640000175000017500000000206111363414172017723 0ustar iam23iam23K = 10; % Cross-validation experiment with naive median predictor
num_runs = 30;
num_trials = 100;
trial_scores = zeros(num_trials, 1);
for tt = 1:num_trials
fprintf('trial %d / %d\r', tt, num_trials);
[xx,yy] = read_forestfires();
N = size(xx, 2);
% Bootstrap resample:
idx = ceil(rand(N, 1)*N);
xx = xx(:, idx);
yy = yy(idx);
scores = zeros(num_runs, 1);
for run = 1:num_runs
idx = randperm(N);
xx = xx(:, idx);
yy = yy(idx);
batches = make_batched(yy', K);
for bb = 1:K
pred = median(cell2mat(batches([1:bb-1,bb+1:K])));
scores(run) = scores(run) + mean(abs(batches{bb}-pred));
end
scores(run) = scores(run) / K;
end
trial_scores(tt) = mean(scores);
end
fprintf('\n');
disp(errorbar_str(mean(trial_scores), std(trial_scores)));
disp(errorbar_str(trial_scores));
hist(trial_scores);
% >> naive_pred
% 12.8372 +/- 0.0046
%
% SVM got 12.71 +/- 0.01
%
% Cheating with naive_pred gets:
% >> mean(abs(yy-median(yy)))
% ans = 12.83
surr_code/surr_code/data/synthetic.mat0000644000175000017500000004125111415646336017102 0ustar iam23iam23MATLAB 5.0 MAT-file, Platform: GLNX86, Created on: Thu Jun 3 11:12:14 2010 IM8xc``b6
3"a> ,N/K,LKNe`㺾k=8 Ax\w4o?̔RBv[2*--)HBĻn,Ѧ9}^__o/~G;O;ϙ3<<_[Fn-s3DuߢߦV燝:=
# ,s9Mc,RL=n1Ul8,M>piUc5(ZÑC̷YqLsƒ^tڨo؈ji9xkk
?3>N-]
㝯hlףFQ$s)Q-}ح+1V[2^_Ps|c_0L_|X_}1@h}oӡA,w˓xNGgSQQi":2>PѾ-O+̯qx2lM#rRIgs/eIlfhk7ŕP[NRqPHg7_c,Ks
?Ju6}Ζ>شP81Ę!!JXhՊis!4*,;\cX߅?J:q}qmz]цglmJiJV/k엍9=˘d(xF#w&aunɅ8{hԛ덻8pGoa3}@We*}U^Q=k
gSG*asTLɜؓk='˗1|ǈBvퟘVXJ@lVh֑W9d|MI{
v_0Wf.P|.fQzhF0OH Iz6Ӑ{-HA* ?nX9i9F-|
n~?Pud i8[I.g1x33bՃ`&
o;i)-
F[&4~[opt_̆y|}f*wlm#CKO%Óq֠wb2~Pݛl%/i}]A(j_O0D<ծRҕHYs؍6{< R
iQ7Ոvt瘰{:I!CsGv뇧Lj9Me9>XH7 ۿK
AS
rc6Tヷ4#(.|[q?+vqGDZ'.?@C*
a2Szq:
'{:Aos@`G-
*3;J
ˬ=8ٷ篎?H8X3>]ƷL)88u~./ͬa")3
_u@꒿H`@QTZFvf<<9pj
ٔ5#p*yGI<[r'7B
K u{L@+/"`X{O?vrsֻUmyo{Η 2qВ
lud_̃iX$$NZ̠?aj):6/|u @eQL|4^oR2txI&QPvᅅO#/]B&^<
N8m5< E>[އOQ4I#W_l+%?ƃOP-#a1ؚn;k1ڲ$ܘOIU"(YY1i<ѳ6SGOaC.z_
F) h&Wquܥ루`/%a^O6VyO!j9CCyO/O a.j0ly%gy_[o
E QRF=JHh`? _qҙ$L5t@u-S0]NkTIl>zb=velȬC\!YZVЃ☳JNC*tVH7&0I_"G{LcGBi.
t}½qQQ|qQ^61UrKE滩^!أ4{P]Y݉]Rqt3vv> kbbW14O8%([էga2#!kD*,TǤ$ ^製rژWZ5@P*滊K?!w5¥
ϟo}6Y6⽨Pz4VX\kjưUw=//.#~E%^sûGNN:ڵ֏|OC`!ܚQc=LSs}ͧqQn-)*S[)<+G$2i#S3I7tx~ʱ- fQDlQFnl
cՖLԩ-[,GmbSv}a3*`۬ qĐ_BWJ2[{Yr~O+(8wh5V% P6eA^/4T.%%=Ըs)([ȑ"W&ox^}tu
SIHRƈ.{:`Q:
xvBDf:fMʯvR`rB{iL/ϑ`1}!C[oT&p}܅:D\3Yf6uRus
v_cͧur$B4AIg:l>+^ӃLZ,*+%J(jUQ.]~)
<`gJ!*M:c/wW9̈8GQ[f{olABsi[.ݏH|ԛG9/~rm
]fyO7WĹB
fi}VHn]3
+wQzRt)Ո7a"$DMPȹc[KE4({QπւE冏=;yTh#, :gq[Hae^j|,[: a=;9zp|f:*e/ߍJ-kf$ƉB+o(?uxGsAm}Hv9F17j'"ށoWyTWZ*R\rgtlX1C?~kX'6jN9]|rnW:_+:F"MEO&:1rЊo 镖[@ٞQHXvrk<+IJ>9kTEeߝ_Z<1*S{MGfxF$6ޜW)XwG俘r
Y ^[Ҽ=ȹaEBg!]0hB`ȡx&5'֚{ܻ2o@jJqf/#~p}}k%6".lڤKRB r>z ?y1"?o.`TnՋR9qBlߑaA=cG̭}yq3Byq߬ēX@G;'UaC+*V=YmOkU
p+'?6ⱖ|QtvψS7Pzĉ|Lp/o}>T8i3m0&QYIdBn8}w_"0)P:Or:ADاkA*FMnaI6t !Ng9lߧtYSZ|=rP9?!y@'^
_:gm(uS=Ts_zƞ+3c$
[+ү1ilpgGo3&nnÐ$8"d(m.a \rixbK6^'^O[H?txjk=EdurZHғz)Swwg3aK7 "㲶:|AhnInݵ
DdϩΊOAw2Es4Xx+sgPSM|8.k&ϒ)T.|Ķ<ʽvd T6 cƢaSl R\p\U*5uذcGGriNzW%N{%#P)c|Dۧ9k!_&:ĴW5Q~5*;D&?gLH2N{"^.Sr>1\Glmޔ8=BFSحp
s쨴g=)jpHAӈy/(QVBrS._R*pU~'<~rt+Ixu/eqW4N֮,1hET۪Nnߏ{$MW`]>AFkw>\$v\Ҽ3agMj
{corr3u 8>wfw3
xPyF;ͯF<1p]bù3{>Y9֢h&dMj;-`1k^^6B@L^evb{beEǏW*Z$cŒK3vʛDO8?id$~V?-a=8+YjX
I"y.5
vCm;R3C|#X\]y,۷Ϸ
?y"Vk/m,]D^'H!әGL0%ShQ34t] gx{iUZLߏlǃ ,}rr7RxGNvk=|:Èx|%s7{1`& {`.>Ӹ${'з d!w2zÒj
aKa3$Oqps *6Auvnnxqy47q[ȳTD! 3:K;r.l̙
{nKA^bd~:Tj{81lPѝЌ]WFFox%+Sٔ~+(`C[j!sz5Sf`8,뇩~D4,9Ptķ
.sboK((oBƽ4ĸe_Wlvm|}8.[_aGCwH v_;ظJckm#l2
B"7XXLf|lL=%CAE߫_+d?25 )nϏO
_@A6!?6kkHD5-(0R~2Jq]̖Qj~RGɛ:O/|[^?t165Íl<a}Dơ1Jlh07OCR5U1s,W 4U_
@*E$h"v4ʔBIHT<(,Cee~L{=o@
uϺuɾevøaۏeim
J4(&W_h(Eeb_Ȱ+?y)66RB#XVQ
ڑbWYAĄl;.-o#VTJ.~q|aet
^ݵhޡ%$>35ءfs"o
#yG^)U!7CIoXxo|],L՟):'Lw
.Lwekާ-hhbɖ8qţBB
A¾wϱj0Q'UTzNc"}ڂW^uÙV<0'7k|G|^U.Z!^wD'-ḱnXΝ<];[+wož&.&\tmXPwO
`"c
:HF*m$Nlsadk!:[n
@zPQ!y+fjE^-
'ENpϹ2$~/W{c)6qR8& Ҝ*mj?7@dZ3EX'N)VnP߶O?O[aFb.[@D!}qyՐZ k-ZW,? +0ސŁ!$%P>h~Cm$gŋKavv&[t-%r
M$N}Big1Ғ*hV#
v4/̓]zޮMO
3SԠ@A.Y ];=#f
yQmۆQ݁mXRx"^E*kґ~H3%l3^~Et؟{_]1
c 799P/ՔD3p.*d'-dcuű>ܻ@P`bn=`gtu`Bթe4"pۓ!<ѐexHmTeE,hp}Ow\Flq#ܗWP/"]
n V]ONҴ
q["$
FB`4wa%*gev`?Ru85-F` |防a\DE=ɣs1_$ț@OS57nY=
lfvhfp)p}4q
)K6.Bs1G2*ޭ3'eBv}#e0ɳL]2tGVpִa=ie|Q*?pR̛4~]v2~Ïb;nT}&dׁͶwPnKxZЉmĆVHyuX&0Xp{㩛[s*ڙ5i14b#^n'f$߸Շj~
N%}SrPP>0I{drE
®>FmJGA͑UkZӏ$[U#FNhf&P+JaUB>tTNgJ+2FJ9tys|ӏ5IQR**Z.TrRa|A{/\L>3C|{Qg."՚\d-8|\K_K3$O/WJ@Dq/g=O )Ed?Mʍ"o0"XHsS"s&ii5RJj3Fl\lyx6fjBP_j5՛LZE)3a
ź|RsAݳoi~!/c:>cB%vm}v/)Xgip:r3 OCԝr95}JP;j2m LX["h
OæHj;,< |UL3ײ;x8g47E$Az1%RiNaT>{ڤocT ~ef9=A,ED3ÐS.%gtWf.T,}¯ӟ*l6=xE,OMFz@- ҂Wo<.1,~Yߋ=^{:TbCoO,(
"}%$
xVIsVRr~X'Au<hWrEǴǳ=ϘcѼ Ɩl+{AУ3h0&1y8Ve6n\6u&R#pumɶ>dbA:cw_-k\^*`xmйՃxML\^SK gqvտNkDWCA?Ѻ~IٌqRxxTJ=fXB~dߎ%-%dѨV4I
r:;Pyat$+c\*< Sv\9;|`W?^ekgYɅE<:la"wP^WYh=y;<遳OXj{OrqSD*ٲpM]'g>M1k-x/,/ƶ[xY4~ơbr&]k#1};ePVT4 FrҘ(2OmSZp: 0ᕫ`5_+lSj~7Ū!\M*wGCٱ쏎JY,
w:Vp٬eqCXGyzKe
TV Iiۯ|^,,:"EpVEܝpGpVE=܌K6ldº¡m
gʆv݈);T=Õ)?DHw]YL]l`]lw)_9Ʌ%Uzn7q10(rMl4$UgW|r^>Rd$,^ =V$,WRQGk!w3AI\j#5r.L11V=HÀmN?Aίuk??{h*y1ɰ`|.nσoYu(O:V_zTyF)6w[YTM95zOέGנA("IW~LϤ ]GH31>_G#dȚWէGr be>|Q%ImDgYxrixA;J)Yߖ},}UHM4!:y{(*4Orۢlmw?]½IZ_r?"ՊXߠ}g]OsqAUXNJ>Q7tq_U°\xCmӖݰoLS%\:O$PКtvyfx\=J71u,kv~|onchUl|(.V
PP:J(Ddː.:yF \zp:n@lsV+A^OnךNӪO.m!=h2`c.yrO%o&4V1P ҙ
l"'v!71Hᒎ;1pP]f
Dm+Nq5BvV@73mƐ,tZv<[dh+(_v$=*Fpq@e ʦsO:1p\dBF岂|u(hQc0(քS-mܣ0BD'qp :zJ1&_Xjkw;;?Vݲ_JXVy,AV{d>"
dtt]YǾ.m>@_J\DH4prY?【3.Lw&ƈ@WNӵ/Ln<@ge37GEq.IL,jIt4ڜ?fB`&xAv3*apvv( T,U<-'hpZ*w?AӄzpkȢ)'ή%W8:eob1iBNϻYdSc5v\XY
:;zQ?qUjN]g$JǼmG=cd[n
\r$gٺ
k)D\jXMoSsic,O-^/{u)yQ<֟9uf߃f)/hulwfJ?.ߨnoWephӆuADx:G6{!NṁD#gRۘ/8Vn7BV>)8@
E.R˴6
({܉PVgib~D}`=#H-'dN6ov*Kqcݏe}HcO5fB#Pη_Gńo%rѶ5PvF =Xe_/kᕅK7l|KM02E֯SAM$XYȀ8AZO;[~L
{@JYu|}5H^XLk2ub==]J)P\ۍcNk
w5"4,h>;mu~h&\z轤bK|K>=u!eLegN%^%X>YFn¶-"]&47N-D lѣ-;kJS
p E!7[^֠Ȱ[WzeN-ɠ8&^'!eء=om=B|lq1O҇
jk()lƍ#abViwNKW1#ᢿV:&<(i6>q"n:ͩce !pU}n`XKe+Y[gNz.n'ǞJ5ax埁l*b*B2.0Aq=5>f7ߟ]nz,)BXv"Cb雰1"+$vc!=Z
'7DW;jsǓd.D'Vu7~@+
{3l$sh@V%<3~7I.#QP:1YUylx*+YԬ{
Qo%#i}=utu*Bg췔[f#!\˾@+#C&ϏM%1#^ۆܛq?pylr4 wDFbL+mkh+RGcZoV'R\^
N:kKU+|VF3[M_𰚚!3Z-)H ୌa(RW-ע!\iRD]f ay4Մ[k)onHS!?jA݊nʟ!R]LSX
(؍xqOgPbA-ZXMTuz"_!] fuɔ! 9KK8Q
P}ݚh6]j*=
98tQ.w&R^mj*sFackg-Hţ J^qQуY;=x\*ZHY 6C]2YDa}JkWaOݏ
k):&w[ #v/(Gԃz"6hCę.&ڏ*u%VڱIaGeU$:ّL`F4mIC9KV4T:kvNek)Լ[1:v2ﯜr1hhPy߃L/@Iڂ72vR)z^*i2Oq;wgi0,sE
vm(*"`>_C|EH]}/.ӚuH29d;ujFaOׅN?L 2EB̛ʑhl오ØD)QV{)\W%JQA1wK&?*H^VIhA|חjeK7kiz}hjM lAۄI|k
vdC|^.+8?/#QqF7'VOh!l岼ah[}B
S~-&J_J1R<}%;!;>@ŧm!R4i:*FRqS]0`c8vV6]jRv[T#6:=+';N>!gY1M
bdXN.[.dD^X2surr_code/surr_code/data/gp_job.m0000640000175000017500000000377411361665706016011 0ustar iam23iam23% First stab. First classify into "fire" vs "no fire" and then regress on just
% the "fire" class. These tasks are probably related, but stuff that for now.
function gp_play()
addpath('../experiment_toolbox');
name = 'gp_play';
num_runs = 30;
success = experiment_run(name, num_runs, @one_run);
function result = one_run()
K = 10; % Cross-validation experiment with naive median predictor
[xx,yy] = read_forestfires();
N = size(xx, 2);
idx = randperm(N);
xx = xx(:, idx);
yy = yy(idx);
y_batches = make_batched(yy', K);
x_batches = make_batched(xx, K);
score = 0;
for bb = 1:K
train_x = cell2mat(x_batches([1:bb-1,bb+1:K]));
train_y = cell2mat(y_batches([1:bb-1,bb+1:K]))';
test_x = x_batches{bb};
test_y = y_batches{bb}';
pred = blah(train_x, train_y, test_x);
score = score + mean(abs(test_y-pred));
end
score = score / K;
result.score = score;
function pred = blah(xx, yy, test_x)
use_dims = [9,10,11,12];
xx = xx(use_dims, :);
test_x = test_x(use_dims, :);
std_fn = get_standardize_fns(xx);
xx = std_fn(xx);
test_x = std_fn(test_x);
% Classification:
cc = (yy > min(yy))*2 - 1;
loghyper = [0.0; 0.0];
loghyper = minimize(loghyper, 'binaryEPGP', -100, 'covSEiso', xx', cc);
cpred = binaryEPGP(loghyper, 'covSEiso', xx', cc, test_x');
% Work out quantile of regression need to get median. If "no fire" has
% more than 0.5 probability then predict zero.
quantile = cpred - 0.5;
pred = zeros(size(test_x, 2), 1);
mask = (quantile > 0);
quantile = quantile(mask);
test_x = test_x(:, mask);
% Regression
idx = (yy > min(yy));
xx = xx(:, idx);
yy = log(yy(idx));
[std_fn, destd_fn] = get_standardize_fns(yy');
yy = std_fn(yy);
% GP regression:
covfunc = {'covSum', {'covSEiso','covNoise'}};
loghyper = [log(1.0); log(1.0); log(0.5)];
loghyper = minimize(loghyper, 'gpr', -100, covfunc, xx', yy);
[mu, S2] = gpr(loghyper, covfunc, xx', yy, test_x');
point_est = norminvcdf(quantile, sqrt(S2), mu);
% Get point_est that is required quantile through Gaussian prediction.
pred(mask) = exp(destd_fn(point_est));
surr_code/surr_code/data/read_forestfires.m0000640000175000017500000000040711361404022020043 0ustar iam23iam23function [xx,yy] = read_forestfires()
%READ_FORESTFIRES dataset
%
% [xx,yy] = read_forestfires();
%
% Outputs:
% xx DxN
% yy Nx1
% Iain Murray, April 2010
data = csvread('forestfires.csv',1,0); % N x (D+1)
yy = data(:,end);
xx = data(:,1:end-1)';
surr_code/surr_code/data/results/0000750000175000017500000000000011361740144016046 5ustar iam23iam23surr_code/surr_code/data/playing.m0000640000175000017500000000112611361410502016157 0ustar iam23iam23clear;
[xx,yy] = read_forestfires();
% Best result used SVM on just (temp, RH, wind, rain):
%xx = xx([9,10,11,12], :);
% Add constant feature for bias weight to hit:
xx = [xx; ones(1, size(xx,2))];
%yy = log(1+yy);
yy = log(yy);
yy(yy==-Inf) = min(yy(yy>min(yy))) - 1; % Just for vis purposes
% Linear regression on just non-zero outputs, to get some direction for
% visualization:
idx = ~(yy==min(yy));
xx2 = xx(:,idx);
yy2 = yy(idx);
%ww = xx2'\yy2;
%ww = xx'\yy;
ww = xx'\(yy>min(yy)); % Set direction just to separate fire vs non-fire
clf;
%plot(xx2'*ww, yy2, 'x');
plot(xx'*ww, yy, 'r.');
surr_code/surr_code/data/ionosphere.data0000644000175000017500000022526311415646336017402 0ustar iam23iam231,0,0.99539,-0.05889,0.85243,0.02306,0.83398,-0.37708,1,0.03760,0.85243,-0.17755,0.59755,-0.44945,0.60536,-0.38223,0.84356,-0.38542,0.58212,-0.32192,0.56971,-0.29674,0.36946,-0.47357,0.56811,-0.51171,0.41078,-0.46168,0.21266,-0.34090,0.42267,-0.54487,0.18641,-0.45300,1
1,0,1,-0.18829,0.93035,-0.36156,-0.10868,-0.93597,1,-0.04549,0.50874,-0.67743,0.34432,-0.69707,-0.51685,-0.97515,0.05499,-0.62237,0.33109,-1,-0.13151,-0.45300,-0.18056,-0.35734,-0.20332,-0.26569,-0.20468,-0.18401,-0.19040,-0.11593,-0.16626,-0.06288,-0.13738,-0.02447,0
1,0,1,-0.03365,1,0.00485,1,-0.12062,0.88965,0.01198,0.73082,0.05346,0.85443,0.00827,0.54591,0.00299,0.83775,-0.13644,0.75535,-0.08540,0.70887,-0.27502,0.43385,-0.12062,0.57528,-0.40220,0.58984,-0.22145,0.43100,-0.17365,0.60436,-0.24180,0.56045,-0.38238,1
1,0,1,-0.45161,1,1,0.71216,-1,0,0,0,0,0,0,-1,0.14516,0.54094,-0.39330,-1,-0.54467,-0.69975,1,0,0,1,0.90695,0.51613,1,1,-0.20099,0.25682,1,-0.32382,1,0
1,0,1,-0.02401,0.94140,0.06531,0.92106,-0.23255,0.77152,-0.16399,0.52798,-0.20275,0.56409,-0.00712,0.34395,-0.27457,0.52940,-0.21780,0.45107,-0.17813,0.05982,-0.35575,0.02309,-0.52879,0.03286,-0.65158,0.13290,-0.53206,0.02431,-0.62197,-0.05707,-0.59573,-0.04608,-0.65697,1
1,0,0.02337,-0.00592,-0.09924,-0.11949,-0.00763,-0.11824,0.14706,0.06637,0.03786,-0.06302,0,0,-0.04572,-0.15540,-0.00343,-0.10196,-0.11575,-0.05414,0.01838,0.03669,0.01519,0.00888,0.03513,-0.01535,-0.03240,0.09223,-0.07859,0.00732,0,0,-0.00039,0.12011,0
1,0,0.97588,-0.10602,0.94601,-0.20800,0.92806,-0.28350,0.85996,-0.27342,0.79766,-0.47929,0.78225,-0.50764,0.74628,-0.61436,0.57945,-0.68086,0.37852,-0.73641,0.36324,-0.76562,0.31898,-0.79753,0.22792,-0.81634,0.13659,-0.82510,0.04606,-0.82395,-0.04262,-0.81318,-0.13832,-0.80975,1
0,0,0,0,0,0,1,-1,0,0,-1,-1,0,0,0,0,1,1,-1,-1,0,0,0,0,1,1,1,1,0,0,1,1,0,0,0
1,0,0.96355,-0.07198,1,-0.14333,1,-0.21313,1,-0.36174,0.92570,-0.43569,0.94510,-0.40668,0.90392,-0.46381,0.98305,-0.35257,0.84537,-0.66020,0.75346,-0.60589,0.69637,-0.64225,0.85106,-0.65440,0.57577,-0.69712,0.25435,-0.63919,0.45114,-0.72779,0.38895,-0.73420,1
1,0,-0.01864,-0.08459,0,0,0,0,0.11470,-0.26810,-0.45663,-0.38172,0,0,-0.33656,0.38602,-0.37133,0.15018,0.63728,0.22115,0,0,0,0,-0.14803,-0.01326,0.20645,-0.02294,0,0,0.16595,0.24086,-0.08208,0.38065,0
1,0,1,0.06655,1,-0.18388,1,-0.27320,1,-0.43107,1,-0.41349,0.96232,-0.51874,0.90711,-0.59017,0.89230,-0.66474,0.69876,-0.70997,0.70645,-0.76320,0.63081,-0.80544,0.55867,-0.89128,0.47211,-0.86500,0.40303,-0.83675,0.30996,-0.89093,0.22995,-0.89158,1
1,0,1,-0.54210,1,-1,1,-1,1,0.36217,1,-0.41119,1,1,1,-1,1,-0.29354,1,-0.93599,1,1,1,1,1,-0.40888,1,-0.62745,1,-1,1,-1,1,-1,0
1,0,1,-0.16316,1,-0.10169,0.99999,-0.15197,1,-0.19277,0.94055,-0.35151,0.95735,-0.29785,0.93719,-0.34412,0.94486,-0.28106,0.90137,-0.43383,0.86043,-0.47308,0.82987,-0.51220,0.84080,-0.47137,0.76224,-0.58370,0.65723,-0.68794,0.68714,-0.64537,0.64727,-0.67226,1
1,0,1,-0.86701,1,0.22280,0.85492,-0.39896,1,-0.12090,1,0.35147,1,0.07772,1,-0.14767,1,-1,1,-1,0.61831,0.15803,1,0.62349,1,-0.17012,1,0.35924,1,-0.66494,1,0.88428,1,-0.18826,0
1,0,1,0.07380,1,0.03420,1,-0.05563,1,0.08764,1,0.19651,1,0.20328,1,0.12785,1,0.10561,1,0.27087,1,0.44758,1,0.41750,1,0.20033,1,0.36743,0.95603,0.48641,1,0.32492,1,0.46712,1
1,0,0.50932,-0.93996,1,0.26708,-0.03520,-1,1,-1,0.43685,-1,0,0,-1,-0.34265,-0.37681,0.03623,1,-1,0,0,0,0,-0.16253,0.92236,0.39752,0.26501,0,0,1,0.23188,0,0,0
1,0,0.99645,0.06468,1,-0.01236,0.97811,0.02498,0.96112,0.02312,0.99274,0.07808,0.89323,0.10346,0.94212,0.05269,0.88809,0.11120,0.86104,0.08631,0.81633,0.11830,0.83668,0.14442,0.81329,0.13412,0.79476,0.13638,0.79110,0.15379,0.77122,0.15930,0.70941,0.12015,1
0,0,0,0,-1,-1,1,1,-1,1,-1,1,1,-1,1,1,-1,-1,-1,1,1,-1,-1,1,-1,1,1,-1,-1,1,-1,-1,1,-1,0
1,0,0.67065,0.02528,0.66626,0.05031,0.57197,0.18761,0.08776,0.34081,0.63621,0.12131,0.62099,0.14285,0.78637,0.10976,0.58373,0.18151,0.14395,0.41224,0.53888,0.21326,0.51420,0.22625,0.48838,0.23724,0.46167,0.24618,0.43433,0.25306,0.40663,0.25792,1,0.33036,1
0,0,1,-1,0,0,0,0,1,1,1,-1,-0.71875,1,0,0,-1,1,1,1,-1,1,1,0.56250,-1,1,1,1,1,-1,1,1,1,1,0
1,0,1,-0.00612,1,-0.09834,1,-0.07649,1,-0.10605,1,-0.11073,1,-0.39489,1,-0.15616,0.92124,-0.31884,0.86473,-0.34534,0.91693,-0.44072,0.96060,-0.46866,0.81874,-0.40372,0.82681,-0.42231,0.75784,-0.38231,0.80448,-0.40575,0.74354,-0.45039,1
0,0,1,1,0,0,0,0,-1,-1,0,0,0,0,-1,-1,-1,-1,-1,1,-1,1,0,0,0,0,1,-1,-1,1,-1,1,-1,1,0
1,0,0.96071,0.07088,1,0.04296,1,0.09313,0.90169,-0.05144,0.89263,0.02580,0.83250,-0.06142,0.87534,0.09831,0.76544,0.00280,0.75206,-0.05295,0.65961,-0.07905,0.64158,-0.05929,0.55677,-0.07705,0.58051,-0.02205,0.49664,-0.01251,0.51310,-0.00015,0.52099,-0.00182,1
0,0,-1,1,0,0,0,0,-1,1,1,1,0,0,0,0,1,-1,-1,1,1,1,0,0,-1,-1,1,-1,1,1,-1,1,0,0,0
1,0,1,-0.06182,1,0.02942,1,-0.05131,1,-0.01707,1,-0.11726,0.84493,-0.05202,0.93392,-0.06598,0.69170,-0.07379,0.65731,-0.20367,0.94910,-0.31558,0.80852,-0.31654,0.84932,-0.34838,0.72529,-0.29174,0.73094,-0.38576,0.54356,-0.26284,0.64207,-0.39487,1
1,0,1,0.57820,1,-1,1,-1,1,-1,1,-1,1,-1,1,-1,1,-1,1,-1,1,-0.62796,1,-1,1,-1,1,-1,1,-1,1,-1,1,-1,0
1,0,1,-0.08714,1,-0.17263,0.86635,-0.81779,0.94817,0.61053,0.95473,-0.41382,0.88486,-0.31736,0.87937,-0.23433,0.81051,-0.62180,0.12245,-1,0.90284,0.11053,0.62357,-0.78547,0.55389,-0.82868,0.48136,-0.86583,0.40650,-0.89674,0.32984,-0.92128,-0.13341,-1,1
0,0,-1,-1,0,0,-1,1,1,-0.37500,0,0,0,0,0,0,1,-1,-1,-1,1,-1,0,0,1,-1,-1,1,-1,-1,0,0,-1,1,0
1,0,1,0.08380,1,0.17387,1,-0.13308,0.98172,0.64520,1,0.47904,1,0.59113,1,0.70758,1,0.82777,1,0.95099,1,1,0.98042,1,0.91624,1,0.83899,1,0.74822,1,0.64358,1,0.52479,1,1
0,0,-1,-1,1,1,1,-1,-1,1,1,-1,-1,-1,0,0,1,1,-1,-1,1,-1,1,-1,1,1,1,-1,1,-1,-1,1,1,-1,0
1,0,1,-0.14236,1,-0.16256,1,-0.23656,1,-0.07514,1,-0.25010,1,-0.26161,1,-0.21975,1,-0.38606,1,-0.46162,1,-0.35519,1,-0.59661,1,-0.47643,0.98820,-0.49687,1,-0.75820,1,-0.75761,1,-0.84437,1
1,0,1,-1,1,1,1,-1,1,-1,1,-1,1,-0.01840,1,-1,1,1,1,-0.85583,1,1,1,-1,0,0,1,1,1,-0.79141,1,1,1,1,0
1,0,0.88208,-0.14639,0.93408,-0.11057,0.92100,-0.16450,0.88307,-0.17036,0.88462,-0.31809,0.85269,-0.31463,0.82116,-0.35924,0.80681,-0.33632,0.75243,-0.47022,0.70555,-0.47153,0.66150,-0.50085,0.61297,-0.48086,0.56804,-0.54629,0.50179,-0.59854,0.47075,-0.57377,0.42189,-0.58086,1
1,0,0.71253,-0.02595,0.41287,-0.23067,0.98019,-0.09473,0.99709,-0.10236,1,-0.10951,0.58965,1,0.83726,-1,0.82270,-0.17863,0.80760,-0.28257,-0.25914,0.92730,0.51933,0.05456,0.65493,-0.20392,0.93124,-0.41307,0.63811,-0.21901,0.86136,-0.87354,-0.23186,-1,0
1,0,1,-0.15899,0.72314,0.27686,0.83443,-0.58388,1,-0.28207,1,-0.49863,0.79962,-0.12527,0.76837,0.14638,1,0.39337,1,0.26590,0.96354,-0.01891,0.92599,-0.91338,1,0.14803,1,-0.11582,1,-0.11129,1,0.53372,1,-0.57758,1
1,0,0.66161,-1,1,1,1,-0.67321,0.80893,-0.40446,1,-1,1,-0.89375,1,0.73393,0.17589,0.70982,1,0.78036,1,0.85268,1,-1,1,0.85357,1,-0.08571,0.95982,-0.36250,1,0.65268,1,0.34732,0
1,0,1,0.00433,1,-0.01209,1,-0.02960,1,-0.07014,0.97839,-0.06256,1,-0.06544,0.97261,-0.07917,0.92561,-0.13665,0.94184,-0.14327,0.99589,-0.14248,0.94815,-0.13565,0.89469,-0.20851,0.89067,-0.17909,0.85644,-0.18552,0.83777,-0.20101,0.83867,-0.20766,1
0,0,1,1,1,-1,0,0,0,0,-1,-1,0,0,0,0,-1,1,1,1,-1,1,-1,1,1,-1,1,1,-1,1,1,1,0,0,0
1,0,0.91241,0.04347,0.94191,0.02280,0.94705,0.05345,0.93582,0.01321,0.91911,0.06348,0.92766,0.12067,0.92048,0.06211,0.88899,0.12722,0.83744,0.14439,0.80983,0.11849,0.77041,0.14222,0.75755,0.11299,0.73550,0.13282,0.66387,0.15300,0.70925,0.10754,0.65258,0.11447,1
1,0,1,0.02461,0.99672,0.04861,0.97545,0.07143,0.61745,-1,0.91036,0.11147,0.88462,0.53640,0.82077,0.14137,0.76929,0.15189,1,0.41003,0.65850,0.16371,0.60138,0.16516,0.54446,0.16390,0.48867,0.16019,0.43481,0.15436,0.38352,0.14677,1,1,0
1,0,1,0.06538,1,0.20746,1,0.26281,0.93051,0.32213,0.86773,0.39039,0.75474,0.50082,0.79555,0.52321,0.65954,0.60756,0.57619,0.62999,0.47807,0.67135,0.40553,0.68840,0.34384,0.72082,0.27712,0.72386,0.19296,0.70682,0.11372,0.72688,0.06990,0.71444,1
1,0,-1,-1,1,1,1,-0.14375,0,0,-1,1,1,1,0.17917,-1,-1,-1,0.08750,-1,1,-1,-1,1,-1,-1,1,-1,-1,-1,1,1,0,0,0
1,0,0.90932,0.08791,0.86528,0.16888,1,0.16598,0.55187,0.68154,0.70207,0.36719,0.16286,0.42739,0.57620,0.46086,0.51067,0.49618,0.31639,0.12967,0.37824,0.54462,0.31274,0.55826,0.24856,0.56527,0.18626,0.56605,0.12635,0.56101,0.06927,0.55061,0.12137,0.67739,1
1,0,-0.64286,-1,1,0.82857,1,-1,1,-0.23393,1,0.96161,1,-0.37679,1,-1,1,0.13839,1,-1,1,-0.03393,-0.84286,1,0.53750,0.85714,1,1,1,-1,1,-1,1,-1,0
1,0,0.99025,-0.05785,0.99793,-0.13009,0.98663,-0.19430,0.99374,-0.25843,0.92738,-0.30130,0.92651,-0.37965,0.89812,-0.43796,0.84922,-0.52064,0.87433,-0.57075,0.79016,-0.59839,0.74725,-0.64615,0.68282,-0.68479,0.65247,-0.73174,0.61010,-0.75353,0.54752,-0.80278,0.49195,-0.83245,1
0,0,0,0,0,0,0,0,1,1,1,1,0,0,0,0,-0.37500,-1,-1,-1,0,0,0,0,-1,-1,-1,-1,-1,1,1,0,0,0,0
1,0,1,-0.03730,1,-0.07383,0.99601,-0.11039,0.99838,-0.09931,0.98941,-0.13814,0.96674,-0.21695,0.95288,-0.25099,0.91236,-0.34400,0.90581,-0.32152,0.89991,-0.34691,0.87874,-0.37643,0.86213,-0.42990,0.83172,-0.43122,0.81433,-0.42593,0.77919,-0.47977,0.75115,-0.50152,1
1,0,0.94598,-0.02685,-1,0.26131,-0.36393,0.35639,0.69258,-0.63427,1,-0.03353,-0.29020,-0.00550,-0.54852,0.15452,0.91921,-0.46270,1,-0.50424,-0.29735,-0.31454,-0.73864,0.37361,0.83872,-0.46734,0.52208,-0.58130,1,-0.61393,-0.09634,0.20477,-0.06117,0.41913,0
1,0,0.98166,0.00874,0.98103,-0.03818,0.97565,-0.05699,0.95947,-0.06971,0.99004,-0.04507,0.94713,-0.11102,0.93369,-0.12790,0.94217,-0.11583,0.79682,-0.19200,0.88274,-0.17387,0.86257,-0.18739,0.88487,-0.19689,0.81813,-0.21136,0.78546,-0.23864,0.76911,-0.23095,0.74323,-0.23902,1
1,0,0,0,1,0.51724,0,0,0.10991,-1,0,0,0,0,-1,-0.22414,-0.55711,-0.83297,0.76940,0.63147,0,0,0.53448,0.35668,-0.90302,0.44828,1,-1,-1,0.81573,0,0,0,0,0
1,0,0.84134,-0.18362,0.43644,0.02919,0.93421,-0.00267,0.87947,0.13795,0.81121,-0.01789,0.88559,0.54991,0.91714,-0.57486,0.75000,-0.29520,0.86676,-0.20104,1,1,0.46610,-0.16290,0.90066,-0.02778,0.93358,-0.01158,0.61582,-0.32298,0.84463,-0.25706,0.93323,-0.01425,1
0,0,1,1,1,-1,0,0,0,0,1,1,1,1,-1,-1,1,-1,-1,1,0,0,1,-1,1,-1,1,1,-1,-1,0,0,0,0,0
1,0,1,1,1,1,0.91010,1,-0.26970,1,-0.83152,1,-1,1,-1,0.72526,-1,-0.57779,-1,-0.42052,-1,-1,-0.52838,-1,0.90014,-1,1,-1,1,-1,1,-0.34686,1,0.34845,1
1,0,-0.67935,-1,-1,1,1,0.63317,0.03515,-1,-1,-1,1,1,0.88683,-1,-1,1,0.83840,1,1,-1,-1,-1,-0.18856,1,1,-1,-1,-1,-1,1,1,0.33611,0
1,0,0.95659,0.08143,0.97487,-0.05667,0.97165,-0.08484,0.96097,-0.06561,0.94717,0.01279,0.95436,-0.16795,0.94612,-0.19497,0.99630,-0.32268,0.90343,-0.35902,0.91428,-0.27316,0.90140,-0.29807,0.99899,-0.40747,0.87244,-0.34586,0.92059,-0.30619,0.83951,-0.39061,0.82166,-0.41173,1
1,0,0.08333,-0.20685,-1,1,-1,1,0.71875,0.47173,-0.82143,-0.62723,-1,-1,-1,1,-0.02753,0.59152,-0.42113,-0.42113,-0.74628,-1,-1,-0.46801,-1,0.23810,1,-1,-1,-0.38914,-1,-1,-1,0.61458,0
1,0,1,-0.02259,1,-0.04494,1,-0.06682,1,-0.08799,1,0.56173,1,-0.12738,1,-0.14522,1,0.32407,1,-0.17639,0.99484,-0.18949,0.95601,-0.20081,1,-0.92284,0.87280,-0.21793,0.82920,-0.22370,0.78479,-0.22765,0.73992,-0.22981,1
0,0,-1,1,1,-1,-1,1,0,0,1,1,-1,-0.18750,1,1,-1,-1,1,-1,-1,-1,1,1,1,-1,1,1,1,1,0,0,-1,-1,0
1,0,1,0.05812,0.94525,0.07418,0.99952,0.13231,1,-0.01911,0.94846,0.07033,0.95713,0.14644,0.94862,0.11224,0.90896,0.20119,0.96741,0.16265,0.99695,0.14258,0.90784,0.16410,0.91667,0.22431,0.88423,0.23571,0.88568,0.22511,0.78324,0.29576,0.83574,0.31166,1
1,0,0.17188,-1,-1,1,0,0,0,0,-1,1,0,0,-0.61354,-0.67708,0.80521,0.36146,0.51979,0.14375,0,0,-1,-0.27083,-0.84792,0.96250,1,1,-1,0.67708,0,0,0,0,0
1,0,1,0.09771,1,0.12197,1,0.22574,0.98602,0.09237,0.94930,0.19211,0.92992,0.24288,0.89241,0.28343,0.85529,0.26721,0.83656,0.33129,0.83393,0.31698,0.74829,0.39597,0.76193,0.34658,0.68452,0.42746,0.62764,0.46031,0.56791,0.47033,0.54252,0.50903,1
1,0,0.01667,-0.35625,0,0,0,0,0,0,0,0,0,0,0.12292,-0.55000,0.22813,0.82813,1,-0.42292,0,0,0.08333,-1,-0.10625,-0.16667,1,-0.76667,-1,0.18854,0,0,1,-0.27292,0
1,0,1,0.16801,0.99352,0.16334,0.94616,0.33347,0.91759,0.22610,0.91408,0.37107,0.84250,0.46899,0.81011,0.49225,0.78473,0.48311,0.65091,0.56977,0.56553,0.58071,0.55586,0.64720,0.48311,0.55236,0.43317,0.69129,0.35684,0.76147,0.33921,0.66844,0.22101,0.78685,1
1,0,0.63816,1,0.20833,-1,1,1,0.87719,0.30921,-0.66886,1,-0.05921,0.58772,0.01754,0.05044,-0.51535,-1,0.14254,-0.03289,0.32675,-0.43860,-1,1,0.80921,-1,1,-0.06140,1,1,0.20614,-1,1,1,0
1,0,1,-0.41457,1,0.76131,0.87060,0.18593,1,-0.09925,0.93844,0.47990,0.65452,-0.16080,1,0.00879,0.97613,-0.50126,0.80025,-0.24497,0.88065,-0.19095,1,-0.12312,0.93593,0.10678,0.92890,-0.07249,1,-0.27387,0.43970,0.19849,0.51382,-0.05402,1
1,0,0.84783,0.10598,1,0.39130,1,-1,0.66938,0.08424,1,0.27038,1,0.60598,1,0.35507,1,0.02672,0.58424,-0.43025,1,0.63496,0.89130,0.26585,0.91033,-0.33333,1,0.15942,0.37681,-0.01947,1,0.22464,1,0.37409,0
1,0,1,0.28046,1,0.02477,1,0.07764,1,0.04317,0.98762,0.33266,1,0.05489,1,0.04384,0.95750,-0.24598,0.84371,-0.08668,1,0.04150,0.99933,0.27376,1,-0.39056,0.96414,-0.02174,0.86747,0.23360,0.94578,-0.22021,0.80355,-0.07329,1
0,0,1,-1,1,-1,1,-1,1,-1,1,1,1,1,1,-1,1,1,1,1,1,1,1,-1,1,-1,1,-1,1,0.65625,0,0,1,-1,0
1,0,1,0.67784,0.81309,0.82021,0.43019,1,0.20619,0.80541,-0.43872,1,-0.79135,0.77092,-1,0.40268,-0.39046,-0.58634,-0.97907,-0.42822,-0.73083,-0.76339,-0.37671,-0.97491,0.41366,-1,0.41778,-0.93296,0.25773,-1,0.93570,-0.35222,0.98816,0.03446,1
1,0,1,1,1,-1,1,-1,1,1,1,1,1,1,1,-1,1,1,1,1,1,1,1,1,1,1,1,0.5,0,0,1,-1,1,-1,0
1,0,1,0.03529,1,0.18281,1,0.26968,1,0.25068,1,0.28778,1,0.38643,1,0.31674,1,0.65701,1,0.53846,1,0.61267,1,0.59457,0.89593,0.68326,0.89502,0.71374,0.85611,0.67149,0.74389,0.85611,0.71493,0.75837,1
0,0,1,-1,1,1,-1,-1,1,-1,0,0,0,0,-1,1,1,-1,1,-1,-0.75000,1,1,-1,1,-1,1,-1,-1,-1,0,0,1,-1,0
1,0,0.96087,0.08620,0.96760,0.19279,0.96026,0.27451,0.98044,0.35052,0.92867,0.46281,0.86265,0.52517,0.82820,0.58794,0.73242,0.69065,0.69003,0.73140,0.54473,0.68820,0.48339,0.76197,0.40615,0.74689,0.33401,0.83796,0.24944,0.86061,0.13756,0.86835,0.09048,0.86285,1
1,0,0.69444,0.38889,0,0,-0.32937,0.69841,0,0,0,0,0,0,0.20635,-0.24206,0.21032,0.19444,0.46429,0.78175,0,0,0,0,0.73413,0.27381,0.76190,0.63492,0,0,0,0,0,0,0
1,0,1,0.05070,1,0.10827,1,0.19498,1,0.28453,1,0.34826,1,0.38261,0.94575,0.42881,0.89126,0.50391,0.75906,0.58801,0.80644,0.59962,0.79578,0.62758,0.66643,0.63942,0.59417,0.69435,0.49538,0.72684,0.47027,0.71689,0.33381,0.75243,1
0,0,1,1,0,0,1,-1,1,-1,1,1,1,1,1,-1,1,1,1,1,1,-1,-1,-1,1,-1,1,-1,1,1,0,0,1,-1,0
1,0,1,0.04078,1,0.11982,1,0.16159,1,0.27921,0.98703,0.30889,0.92745,0.37639,0.91118,0.39749,0.81939,0.46059,0.78619,0.46994,0.79400,0.56282,0.70331,0.58129,0.67077,0.59723,0.58903,0.60990,0.53952,0.60932,0.45312,0.63636,0.40442,0.62658,1
0,0,1,1,1,-1,1,1,1,1,1,1,1,1,1,1,1,-1,-1,1,-1,1,-1,1,1,-1,1,1,-1,1,-1,-1,-1,1,0
1,0,1,0.24168,1,0.48590,1,0.72973,1,1,1,1,1,1,1,0.77128,1,1,1,1,0.74468,1,0.89647,1,0.64628,1,0.38255,1,0.10819,1,-0.17370,1,-0.81383,1,1
0,0,1,1,1,-1,1,1,-1,1,0,0,1,1,0,0,0,0,-1,1,-1,1,1,1,1,-1,1,1,1,1,1,-1,-1,1,0
1,0,1,-0.06604,1,0.62937,1,0.09557,1,0.20280,1,-1,1,-0.40559,1,-0.15851,1,0.04895,1,-0.61538,1,-0.26573,1,-1,1,-0.58042,1,-0.81372,1,-1,1,-0.78555,1,-0.48252,1
0,0,1,-1,1,1,1,1,1,1,1,1,1,-1,1,-1,1,1,1,-1,1,1,1,1,1,-1,1,1,1,-1,1,1,1,-1,0
1,0,0.92277,0.07804,0.92679,0.16251,0.89702,0.24618,0.84111,0.35197,0.78801,0.42196,0.70716,0.46983,0.70796,0.56476,0.60459,0.64200,0.51247,0.64924,0.39903,0.66975,0.34232,0.68343,0.23693,0.76146,0.18765,0.73885,0.09694,0.71038,0.02735,0.77072,-0.04023,0.69509,1
1,0,0.68198,-0.17314,0.82332,0.21908,0.46643,0.32862,0.25795,0.58304,1,-0.15194,0.01060,0.44523,0.01060,0.38869,0.18681,0.41168,0.10567,0.36353,0.04325,0.30745,-0.00083,0.24936,-0.02862,0.19405,-0.04314,0.14481,-0.04779,0.10349,-0.04585,0.07064,-0.04013,0.04586,0
1,0,0.74852,-0.02811,0.65680,-0.05178,0.80621,0.02811,0.85947,0.02515,0.63462,0.08728,0.71598,0.07840,0.73077,0.05178,0.78550,-0.27811,0.65976,-0.01479,0.78698,0.06953,0.34615,-0.18639,0.65385,0.02811,0.61009,-0.06637,0.53550,-0.21154,0.59024,-0.14053,0.56361,0.02959,1
1,0,0.39179,-0.06343,0.97464,0.04328,1,1,0.35821,0.15299,0.54478,0.13060,0.61567,-0.82090,0.57836,0.67910,0.66791,-0.10448,0.46642,-0.11567,0.65574,0.14792,0.83209,0.45522,0.47015,0.16418,0.49309,0.14630,0.32463,-0.02612,0.39118,0.13521,0.34411,0.12755,0
1,0,0.67547,0.04528,0.76981,-0.10566,0.77358,0.03774,0.66038,-0.04528,0.64528,0.01132,0.66792,-0.13962,0.72075,-0.02264,0.76981,0.08679,0.61887,-0.07925,0.75849,-0.23774,0.73962,-0.14717,0.84906,-0.15094,0.73886,-0.05801,0.66792,0.02264,0.86415,0.03774,0.73208,0.00755,1
1,0,0.72727,-0.05000,0.89241,0.03462,1,0.72727,0.66364,-0.05909,0.48182,-0.16818,0.81809,0.09559,0.56818,1,0.50455,0.21818,0.66818,0.10000,1,-0.30000,0.98636,-1,0.57273,0.32727,0.56982,0.14673,0.42273,0.08182,0.48927,0.14643,1,1,0
1,0,0.57647,-0.01569,0.40392,0,0.38431,0.12941,0.40000,-0.05882,0.56471,0.14118,0.46667,0.08235,0.52549,-0.05490,0.58039,0.01569,0.50196,0,0.45882,0.06667,0.58039,0.08235,0.49804,0.00392,0.48601,0.10039,0.46275,0.08235,0.45098,0.23529,0.43137,0.17255,1
1,0,0.41932,0.12482,0.35000,0.12500,0.23182,0.27955,-0.03636,0.44318,0.04517,0.36194,-0.19091,0.33636,-0.13350,0.27322,0.02727,0.40455,-0.34773,0.12727,-0.20028,0.05078,-0.18636,0.36364,-0.14003,-0.04802,-0.09971,-0.07114,-1,-1,-0.02916,-0.07464,-0.00526,-0.06314,0
1,0,0.88305,-0.21996,1,0.36373,0.82403,0.19206,0.85086,0.05901,0.90558,-0.04292,0.85193,0.25000,0.77897,0.25322,0.69206,0.57940,0.71030,0.39056,0.73176,0.27575,1,0.34871,0.56760,0.52039,0.69811,0.53235,0.80901,0.58584,0.43026,0.70923,0.52361,0.54185,1
1,0,0.84557,-0.08580,-0.31745,-0.80553,-0.08961,-0.56435,0.80648,0.04576,0.89514,-0.00763,-0.18494,0.63966,-0.20019,-0.68065,0.85701,-0.11344,0.77979,-0.15729,-0.06959,0.50810,-0.34128,0.80934,0.78932,-0.03718,0.70882,-0.25288,0.77884,-0.14109,-0.21354,-0.78170,-0.18494,-0.59867,0
1,0,0.70870,-0.24783,0.64348,0.04348,0.45217,0.38261,0.65217,0.18261,0.5,0.26957,0.57826,-0.23043,0.50435,0.37826,0.38696,-0.42609,0.36087,-0.26087,0.26957,0.11739,0.53246,-0.03845,0.31304,-0.12174,0.49930,-0.04264,0.48348,-0.04448,0.64348,-0.25217,0.50435,0.14783,1
1,0,-0.54180,0.14861,-0.33746,0.73375,0.52012,-0.13932,0.31889,-0.06811,0.20743,-0.15170,0.47368,0.08978,0.56347,-0.15480,0.16409,0.45201,0.33746,0.03406,0.50464,0.07121,-0.63777,-0.61610,1,0.65635,0.41348,-0.40116,-0.15170,0.11146,0.02399,0.55820,0.52632,-0.08978,0
1,0,0.29202,0.13582,0.45331,0.16808,0.51783,-0.00509,0.52632,0.20883,0.52462,-0.16638,0.47368,-0.04754,0.55518,0.03905,0.81664,-0.22411,0.42445,-0.04244,0.34975,0.06621,0.28183,-0.20883,0.51731,-0.03176,0.50369,-0.03351,0.34635,0.09847,0.70798,-0.01868,0.39559,-0.03226,1
1,0,0.79157,0.16851,0,0,0.56541,0.06874,0.39468,1,0.38359,0.99557,-0.02439,0.53215,0.23725,0.12860,-0.02661,0.95122,-0.50998,0.84922,-0.10200,0.38803,-0.42572,0.23725,-0.91574,0.80710,-0.34146,0.88248,-1,0.69401,-1,0.12860,0,0,0
1,0,0.90116,0.16607,0.79299,0.37379,0.72990,0.50515,0.59784,0.72997,0.44303,0.81152,0.24412,0.87493,0.06438,0.85038,-0.12611,0.87396,-0.28739,0.79617,-0.46635,0.65924,-0.57135,0.53805,-0.68159,0.39951,-0.71844,0.25835,-0.72369,0.11218,-0.71475,-0.05525,-0.67699,-0.19904,1
1,0,0.97714,0.19049,0.82683,0.46259,0.71771,0.58732,0.47968,0.84278,0.31409,0.92643,0.10289,0.93945,-0.13254,0.84290,-0.32020,0.91624,-0.52145,0.79525,-0.68274,0.49508,-0.77408,0.33537,-0.85376,0.17849,-0.83314,-0.01358,-0.82366,-0.19321,-0.67289,-0.33662,-0.59943,-0.49700,1
1,0,-1,-1,0,0,0.50814,-0.78502,0.60586,0.32899,-1,-0.41368,0,0,0,0,1,-0.26710,0.36482,-0.63518,0.97068,-1,-1,-1,1,-0.59609,-1,-1,-1,-1,1,-1,0,0,0
1,0,0.74084,0.04974,0.79074,0.02543,0.78575,0.03793,0.66230,0.09948,0.67801,0.31152,0.75934,0.07348,0.74695,0.08442,0.70681,-0.07853,0.63613,0,0.70021,0.11355,0.68183,0.12185,0.67016,0.15445,0.64158,0.13608,0.65707,0.17539,0.59759,0.14697,0.57455,0.15114,1
1,0,1,-1,0,0,0.77941,-0.99265,0.80882,0.55147,-0.41912,-0.94853,0,0,0,0,0.72059,-0.77206,0.73529,-0.60294,0,0,0.18382,-1,-1,-1,-1,-1,1,-1,1,-1,0,0,0
1,0,1,0.01709,0.96215,-0.03142,1,-0.03436,1,-0.05071,0.99026,-0.07092,0.99173,-0.09002,1,-0.15727,1,-0.14257,0.98310,-0.11813,1,-0.18519,1,-0.19272,0.98971,-0.22083,0.96490,-0.20243,0.94599,-0.17123,0.96436,-0.22561,0.87011,-0.23296,1
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,-1,0,0,0,0,0,0,0
1,0,0.95704,-0.12095,0.63318,-0.12690,0.96365,-0.18242,0.97026,0.08460,0.92003,-0.01124,0.83543,-0.24719,1,-0.31395,0.99273,-0.21216,0.98678,-0.21018,1,-0.27165,0.93126,-0.39458,1,-0.19233,0.88793,-0.31565,0.81428,-0.23728,0.89095,-0.31857,0.69531,-0.41573,1
1,0,0.28409,-0.31818,0,0,0.68182,-1,0.30682,0.95833,0.64394,0.06439,0.34848,-0.84848,0,0,0.59091,-0.35985,0.45076,-0.80682,0,0,0,0,0.24242,0.17803,1,-0.23864,0.06061,-0.48485,0.16288,-0.70076,0,0,0
1,0,0.94490,-0.49311,1,-0.03692,0.98898,-0.87052,0.90083,0.66942,1,-0.10104,1,-0.12493,1,-0.15017,1,-0.17681,1,-0.20491,1,-0.23452,1,-0.26571,1,-0.29852,1,-0.33304,1,-0.36931,1,-0.40740,1,-0.44739,1
1,0,0,0,0,0,0,0,0,0,0.62195,1,0,0,0,0,0.36585,-0.71951,0.56098,-1,0,0,0,0,0,0,1,0.10976,0,0,0,0,0,0,0
1,0,0.99449,0.00526,0.84082,-0.11313,0.88237,-0.16431,0.99061,-0.06257,0.96484,-0.07496,0.85221,0.02966,0.87161,-0.20848,0.93881,-0.12977,0.98298,-0.08935,0.89876,0.00075,0.87836,-0.05882,0.93368,-0.19872,0.87579,-0.17806,0.94294,-0.16581,0.80253,-0.25741,0.76586,-0.27794,1
1,0,0.10135,0.10811,0,0,0,0,0.54730,0.82432,0.31081,1,0,0,0,0,0.37162,-1,0.33108,-1,0,0,0,0,-0.42568,-1,1,-1,0.55405,-0.23649,0,0,0,0,0
1,0,1,-0.57224,0.99150,-0.73371,0.89518,-0.97450,1,-0.35818,1,-0.23229,0.62890,-0.86402,1,-0.57535,1,-0.79603,0.76771,-0.88952,0.96601,-1,0.70120,-0.74896,0.61946,-0.76904,0.53777,-0.77986,0.81020,-1,1,-1,0.30445,-0.76112,1
1,0,0.65909,-0.62879,0,0,0,0,0.77273,1,1,-0.28030,0,0,0,0,0.62121,-0.22727,0.84091,-1,1,-1,0,0,0,0,1,-0.93939,-0.12879,-0.93182,0,0,0,0,0
1,0,0.86284,0.19310,0.80920,0.41149,0.67203,0.55785,0.54559,0.69962,0.36705,0.81533,0.19617,0.85671,-0.04061,0.86284,-0.17241,0.75785,-0.34100,0.65747,-0.48199,0.56092,-0.60230,0.40996,-0.59234,0.25747,-0.63038,0.08818,-0.57241,-0.07816,-0.54866,-0.19923,-0.42912,-0.31954,1
1,0,0.42000,-0.61000,0,0,1,-1,0.90000,1,0.43000,0.64000,0,0,0,0,0.67000,-0.29000,0.84000,-1,0,0,0,0,0.21000,0.68000,1,0.22000,0,0,0,0,0,0,0
1,0,1,0.23395,0.91404,0.52013,0.78020,0.72144,0.47660,0.84222,0.27639,0.91730,0.09467,0.88248,-0.21980,0.91404,-0.34168,0.75517,-0.51360,0.64527,-0.64527,0.44614,-0.74102,0.29162,-0.70838,0.03591,-0.71731,-0.11943,-0.64962,-0.28183,-0.51251,-0.44505,-0.37432,-0.53319,1
1,0,0.91353,0.81586,-0.72973,1,-0.39466,0.55735,0.05405,0.29730,-0.18599,-0.10241,-0.03158,-0.08970,0.01401,-0.03403,0.01108,-0.00537,0.00342,0.00097,0.00048,0.00075,-0.00003,0.00019,-0.00003,0.00002,-0.00001,0,0,0,0,0,0,0,0
1,0,0.21429,-0.09524,0.33333,0.07143,0.19048,0.19048,0.23810,0.09524,0.40476,0.02381,0.30952,-0.04762,0.30952,-0.04762,0.28571,-0.11905,0.33333,0.04762,0.30952,0,0.21429,-0.11905,0.35714,-0.04762,0.22109,-0.02290,0.19048,0,0.16997,-0.02034,0.14694,-0.01877,1
1,0,1,-0.14754,1,0.04918,0.57377,-0.01639,0.65574,0.01639,0.85246,-0.03279,0.72131,0,0.68852,-0.16393,0.19672,-0.14754,0.65558,-0.17176,0.67213,0.03279,1,-0.29508,0.31148,-0.34426,0.52385,-0.20325,0.32787,-0.03279,0.27869,-0.44262,0.49180,-0.06557,0
1,0,0.98182,0,0.88627,0.03131,0.86249,0.04572,0.80000,0,0.69091,0.04545,0.79343,0.08436,0.77118,0.09579,0.62727,0.25455,0.68182,0.12727,0.70674,0.12608,0.68604,0.13493,0.74545,0.22727,0.64581,0.15088,0.67273,0.02727,0.60715,0.16465,0.58840,0.17077,1
1,0,0.39286,0.52381,-0.78824,0.11342,-0.16628,-0.76378,0.66667,0.01190,0.82143,0.40476,-0.67230,0.30729,-0.34797,-0.63668,0.46429,0.15476,0.54762,0.05952,-0.51830,0.44961,-0.47651,-0.47594,0.32143,0.70238,0.51971,0.38848,0.57143,0.39286,-0.54891,-0.29915,0.25441,-0.55837,0
1,0,0.86889,-0.07111,1,-0.02494,1,-0.06889,0.87778,0.00222,0.83556,-0.06444,1,-0.07287,1,-0.20000,0.86889,0.05333,0.88000,-0.03778,1,-0.11526,1,-0.18667,0.84444,0.03556,1,-0.14162,0.82222,-0.14667,1,-0.15609,1,-0.44222,1
1,0,0.43636,-0.12727,0.58182,-0.14545,0.18182,-0.67273,0.34545,-0.03636,0.29091,-0.05455,0.29091,0.29091,0.36364,-0.41818,0.20000,-0.01818,0.36364,0.05455,0.12727,0.49091,0.61818,0.16364,0.32727,0.16364,0.41098,-0.07027,0.34545,-0.05455,0.12727,-0.36364,0.29091,-0.29091,0
1,0,1,-0.92453,1,0.75472,0.49057,-0.05660,0.62264,0,1,-0.00054,0.45283,0.07547,0.62264,-0.05660,0.98878,-0.00085,0.52830,0,0.52830,0.07547,0.95190,-0.00112,1,0.79245,0.92192,-0.00128,0.94340,-1,1,0.43396,0.43396,-0.11321,1
1,0,0.73810,0.83333,-0.76190,-0.23810,0.33333,-0.14286,0.45238,-0.14286,-0.67285,0.12808,0.33333,0,0.28571,-0.07143,-0.38214,0.51163,0.23810,0.02381,0.45238,0.04762,0.16667,-0.26190,-0.57255,-0.10234,0.24889,-0.51079,1,0,-0.66667,-0.04762,0.26190,0.02381,0
1,0,0.43750,0.04167,0.58333,-0.10417,0.39583,0,0.33333,-0.06250,0.47917,0,0.29167,0.10417,0.54167,0.02083,0.43750,-0.22917,0.35417,-0.22917,0.33333,0.08333,0.25000,0.18750,0.39583,-0.18750,0.44012,-0.10064,0.41667,-0.08333,0.58333,-0.31250,0.33333,-0.06250,1
1,0,1,1,0,0,0,0,0,0,0.47744,-0.89098,-0.51504,0.45489,-0.95489,0.28571,0.64662,1,0,0,0,0,0.62030,0.20301,-1,-1,1,-1,1,1,0,0,0,0,0
1,0,0.95217,0.06595,0.93614,0.13030,0.90996,0.19152,0.84881,-0.49962,0.90023,0.61320,0.77937,0.34328,0.72254,0.37988,0.66145,0.40844,0.95472,0.59862,0.53258,0.44088,0.46773,0.44511,0.40440,0.44199,0.34374,0.43221,0.90330,1,0.23405,0.39620,0.18632,0.37191,1
1,0,0.59840,0.40332,0.82809,0.80521,0.76001,0.70709,0.84010,-0.10984,0.97311,0.07981,0.95824,-0.85727,0.91962,0.88444,0.95452,-0.05206,0.88673,0.18135,0.98484,-0.69594,0.86670,-0.85755,0.28604,-0.30063,1,0.17076,0.62958,0.42677,0.87757,0.81007,0.81979,0.68822,0
1,0,0.95882,0.10129,1,-0.01918,0.98313,0.02555,0.96974,-0.09316,0.98955,-0.02716,0.97980,-0.03096,1,-0.05343,1,-0.05179,0.93840,0.01557,0.97620,-0.09284,0.97889,-0.05318,0.91567,-0.15675,0.95677,-0.06995,0.90978,0.01307,1,-0.10797,0.93144,-0.06888,1
1,0,0,0,-0.33672,0.85388,0,0,0.68869,-1,0.97078,0.31385,-0.26048,-0.59212,-0.30241,0.65565,0.94155,0.16391,0,0,0,0,-0.18043,-1,0,0,1,-1,0,0,0.04447,0.61881,0,0,0
1,0,0.96933,0.00876,1,0.00843,0.98658,-0.00763,0.97868,-0.02844,0.99820,-0.03510,1,-0.01271,1,-0.02581,1,-0.01175,0.98485,0.00025,1,-0.02612,1,-0.04744,0.96019,-0.04527,0.99188,-0.03473,0.97020,-0.02478,1,-0.03855,0.98420,-0.04112,1
1,0,0,0,0.98919,-0.22703,0.18919,-0.05405,0,0,0.93243,0.07297,1,-0.20000,1,0.07027,1,-0.11351,0,0,1,-0.21081,1,-0.41622,0,0,1,-0.17568,0,0,1,-0.25946,0.28919,-0.15676,0
1,0,0.64122,0.01403,0.34146,-0.02439,0.52751,0.03466,0.19512,0.12195,0.43313,0.04755,0.21951,0.04878,0.29268,0,0.36585,0,0.31707,0.07317,0.26829,0.12195,0.23698,0.05813,0.21951,0.09756,0.19304,0.05641,0.17410,0.05504,0.19512,0,0.17073,0.07317,1
1,0,1,1,1,-1,0,0,0,0,1,1,1,-1,1,1,1,-1,0,0,0,0,1,-0.27778,0,0,1,-1,1,1,1,-1,0,0,0
1,0,0.34694,0.20408,0.46939,0.24490,0.40816,0.20408,0.46939,0.44898,0.30612,0.59184,0.12245,0.55102,0,0.51020,-0.06122,0.55102,-0.20408,0.55102,-0.28571,0.44898,-0.28571,0.32653,-0.61224,0.22449,-0.46579,0.14895,-0.59184,0.18367,-0.34694,0,-0.26531,-0.24490,1
1,0,0,0,1,-1,0,0,0,0,1,1,1,-0.25342,1,0.23288,1,-1,0,0,0,0,1,1,0,0,1,-1,0,0,1,-1,0,0,0
1,0,0.89706,0.38235,0.91176,0.37500,0.74265,0.67647,0.45588,0.77941,0.19118,0.88971,-0.02206,0.86029,-0.20588,0.82353,-0.37500,0.67647,-0.5,0.47794,-0.73529,0.38235,-0.86029,0.08824,-0.74265,-0.12500,-0.67925,-0.24131,-0.55147,-0.42647,-0.44118,-0.50735,-0.28676,-0.56618,1
1,0,-1,0.28105,0.22222,0.15033,-0.75693,-0.70984,-0.30719,0.71242,-1,1,-0.81699,0.33987,-0.79085,-0.02614,-0.98039,-0.83007,-0.60131,-0.54248,-0.04575,-0.83007,0.94118,-0.94118,-1,-0.43137,0.74385,0.09176,-1,0.05229,0.18301,0.02614,-0.40201,-0.48241,0
1,0,0.26667,-0.10000,0.53333,0,0.33333,-0.13333,0.36667,0.11667,0.56667,0.01667,0.71667,0.08333,0.70000,-0.06667,0.53333,0.20000,0.41667,-0.01667,0.31667,0.20000,0.70000,0,0.25000,0.13333,0.46214,0.05439,0.40000,0.03333,0.46667,0.03333,0.41667,-0.05000,1
1,0,-0.26667,0.40000,-0.27303,0.12159,-0.17778,-0.04444,0.06192,-0.06879,0.04461,0.02575,-0.00885,0.02726,-0.01586,-0.00166,-0.00093,-0.00883,0.00470,-0.00153,0.00138,0.00238,-0.00114,0.00102,-0.00069,-0.00050,0.00019,-0.00043,0.00026,0.00005,0,0.00015,-0.00008,0.00002,0
1,0,1,-0.37838,0.64865,0.29730,0.64865,-0.24324,0.86486,0.18919,1,-0.27027,0.51351,0,0.62162,-0.05405,0.32432,-0.21622,0.71833,-0.17666,0.62162,0.05405,0.75676,0.13514,0.35135,-0.29730,0.61031,-0.22163,0.58478,-0.23027,0.72973,-0.59459,0.51351,-0.24324,1
1,0,0.94531,-0.03516,-1,-0.33203,-1,-0.01563,0.97266,0.01172,0.93359,-0.01953,-1,0.16406,-1,-0.00391,0.95313,-0.03516,0.92188,-0.02734,-0.99219,0.11719,-0.93359,0.34766,0.95703,-0.00391,0.82041,0.13758,0.90234,-0.06641,-1,-0.18750,-1,-0.34375,0
1,0,0.95202,0.02254,0.93757,-0.01272,0.93526,0.01214,0.96705,-0.01734,0.96936,0.00520,0.95665,-0.03064,0.95260,-0.00405,0.99480,-0.02659,0.99769,0.01792,0.93584,-0.04971,0.93815,-0.02370,0.97052,-0.04451,0.96215,-0.01647,0.97399,0.01908,0.95434,-0.03410,0.95838,0.00809,1
1,0,1,-0.05529,1,-1,0.5,-0.11111,0.36111,-0.22222,1,-0.25712,0.16667,-0.11111,1,-0.34660,1,-0.38853,1,-0.42862,0,-0.25000,1,-0.50333,1,-0.27778,1,-0.57092,1,-0.27778,1,-0.63156,1,-0.65935,0
1,0,0.31034,-0.10345,0.24138,-0.10345,0.20690,-0.06897,0.07405,-0.05431,0.03649,-0.03689,0.01707,-0.02383,0.00741,-0.01482,0.00281,-0.00893,0.00078,-0.00523,-0.00003,-0.00299,-0.00028,-0.00166,-0.00031,-0.00090,-0.00025,-0.00048,-0.00018,-0.00024,-0.00012,-0.00012,-0.00008,-0.00006,1
1,0,0.62745,-0.07843,0.72549,0,0.60784,-0.07843,0.62745,-0.11765,0.68627,-0.11765,0.66667,-0.13725,0.64706,-0.09804,0.54902,-0.11765,0.54902,-0.21569,0.58824,-0.19608,0.66667,-0.23529,0.45098,-0.25490,0.52409,-0.24668,0.56863,-0.31373,0.43137,-0.21569,0.47059,-0.27451,0
1,0,0.25000,0.16667,0.46667,0.26667,0.19036,0.23966,0.07766,0.19939,0.01070,0.14922,-0.02367,0.10188,-0.03685,0.06317,-0.03766,0.03458,-0.03230,0.01532,-0.02474,0.00357,-0.01726,-0.00273,-0.01097,-0.00539,-0.00621,-0.00586,-0.00294,-0.00520,-0.00089,-0.00408,0.00025,-0.00291,1
1,0,-0.65625,0.15625,0.06250,0,0,0.06250,0.62500,0.06250,0.18750,0,-0.03125,0.09375,0.06250,0,0.15625,-0.15625,0.43750,-0.37500,0,-0.09375,0,0,0.03125,-0.46875,0.03125,0,-0.71875,0.03125,-0.03125,0,0,0.09375,0
1,0,1,-0.01081,1,-0.02703,1,-0.06486,0.95135,-0.01622,0.98919,-0.03243,0.98919,0.08649,1,-0.06486,0.95135,0.09189,0.97838,-0.00541,1,0.06486,1,0.04324,0.97838,0.09189,0.98556,0.01251,1,-0.03243,1,0.02703,1,-0.07027,1
1,0,0.85271,0.05426,1,0.08069,1,1,0.91473,-0.00775,0.83721,0.03876,1,0.27153,1,1,0.81395,0.04651,0.90698,0.11628,1,0.50670,1,-1,0.80620,0.03876,1,0.71613,0.84496,0.06977,1,0.87317,1,1,0
1,0,0.90374,-0.01604,1,0.08021,1,0.01604,0.93048,0.00535,0.93583,-0.01604,1,0,1,0.06417,1,0.04813,0.91444,0.04278,0.96791,0.02139,0.98930,-0.01604,0.96257,0.05348,0.96974,0.04452,0.87701,0.01070,1,0.09091,0.97861,0.06417,1
1,0,-0.20500,0.28750,0.23000,0.10000,0.28250,0.31750,0.32250,0.35000,0.36285,-0.34617,0.09250,0.27500,-0.09500,0.21000,-0.08750,0.23500,-0.34187,0.31408,-0.48000,-0.08000,0.29908,0.33176,-0.58000,-0.24000,0.32190,-0.28475,-0.47000,0.18500,-0.27104,-0.31228,0.40445,0.03050,0
1,0,0.60000,0.03333,0.63333,0.06667,0.70000,0.06667,0.70000,0,0.63333,0,0.80000,0,0.73333,0,0.70000,0.10000,0.66667,0.10000,0.73333,-0.03333,0.76667,0,0.63333,0.13333,0.65932,0.10168,0.60000,0.13333,0.60000,0.16667,0.63333,0.16667,1
1,0,0.05866,-0.00838,0.06704,0.00838,0,-0.01117,0.00559,-0.03911,0.01676,-0.07542,-0.00559,0.05307,0.06425,-0.03352,0,0.09497,-0.06425,0.07542,-0.04749,0.02514,0.02793,-0.00559,0.00838,0.00559,0.10335,-0.00838,0.03073,-0.00279,0.04469,0,0.04749,-0.03352,0
1,0,0.94653,0.28713,0.72554,0.67248,0.47564,0.82455,0.01267,0.89109,-0.24871,0.84475,-0.47644,0.56079,-0.75881,0.41743,-0.66455,0.07208,-0.65426,-0.19525,-0.52475,-0.44000,-0.30851,-0.55089,-0.04119,-0.64792,0.16085,-0.56420,0.36752,-0.41901,0.46059,-0.22535,0.50376,-0.05980,1
1,0,0.05460,0.01437,-0.02586,0.04598,0.01437,0.04598,-0.07759,0.00862,0.01724,-0.06609,-0.03736,0.04310,-0.08333,-0.04598,-0.09483,0.08046,-0.04023,0.05172,0.02011,0.02299,-0.03736,-0.01149,0.03161,-0.00862,0.00862,0.01724,0.02586,0.01149,0.02586,0.01149,-0.04598,-0.00575,0
1,0,0.72414,-0.01084,0.79704,0.01084,0.80000,0.00197,0.79015,0.01084,0.78424,-0.00985,0.83350,0.03251,0.85123,0.01675,0.80099,-0.00788,0.79113,-0.02956,0.75961,0.03350,0.74778,0.05517,0.72611,-0.01478,0.78041,0.00612,0.74089,-0.05025,0.82956,0.02956,0.79015,0.00788,1
1,0,0.03852,0.02568,0.00428,0,0.01997,-0.01997,0.02140,-0.04993,-0.04850,-0.01284,0.01427,-0.02282,0,-0.03281,-0.04708,-0.02853,-0.01712,0.03566,0.02140,0.00428,0.05136,-0.02282,0.05136,0.01854,0.03994,0.01569,0.01997,0.00713,-0.02568,-0.01854,-0.01427,0.01997,0
1,0,0.47090,0.22751,0.42328,0.33598,0.25661,0.47619,0.01852,0.49471,-0.02116,0.53968,-0.34127,0.31217,-0.41270,0.32540,-0.51587,0.06878,-0.5,-0.11640,-0.14815,-0.14550,-0.14815,-0.38095,-0.23280,0.00265,0.03574,-0.31739,0.15873,-0.21693,0.24868,-0.24339,0.26720,0.04233,1
1,0,0.08696,0.00686,0.13959,-0.04119,0.10526,-0.08238,0.12586,-0.06178,0.23341,-0.01144,0.12357,0.07780,0.14645,-0.13501,0.29062,-0.04805,0.18993,0.07323,0.11670,0,0.11213,-0.00229,0.15103,-0.10297,0.08467,0.01373,0.11213,-0.06636,0.09611,-0.07323,0.11670,-0.06865,0
1,0,0.94333,0.38574,0.48263,0.64534,0.21572,0.77514,-0.55941,0.64899,-0.73675,0.42048,-0.76051,0,-0.62706,-0.31079,-0.38391,-0.62157,-0.12797,-0.69287,0.49909,-0.63620,0.71481,-0.37660,0.73857,-0.05484,0.60098,0.30384,0.45521,0.60512,0.02742,0.54479,-0.21572,0.50457,1
1,0,0.01975,0.00705,0.04090,-0.00846,0.02116,0.01128,0.01128,0.04372,0.00282,0.00141,0.01975,-0.03103,-0.01975,0.06065,-0.04090,0.02680,-0.02398,-0.00423,0.04372,-0.02539,0.01834,0,0,-0.01269,0.01834,-0.01128,0.00564,-0.01551,-0.01693,-0.02398,0.00705,0,0
1,0,0.85736,0.00075,0.81927,-0.05676,0.77521,-0.04182,0.84317,0.09037,0.86258,0.11949,0.88051,-0.06124,0.78342,0.03510,0.83719,-0.06796,0.83570,-0.14190,0.88125,0.01195,0.90515,0.02240,0.79686,-0.01942,0.82383,-0.03678,0.88125,-0.06423,0.73936,-0.01942,0.79089,-0.09186,1
1,0,1,-1,1,1,-1,1,1,-1,1,-1,-1,-1,-1,1,1,1,1,1,-1,1,1,-1,1,-1,1,1,1,1,-1,1,-1,1,0
1,0,0.85209,0.39252,0.38887,0.76432,0.08858,0.98903,-0.42625,0.88744,-0.76229,0.49980,-0.93092,0.10768,-0.85900,-0.31044,-0.66030,-0.55262,-0.19260,-0.86063,0.28444,-0.80496,0.64649,-0.35230,0.77814,-0.23324,0.71698,0.21343,0.37830,0.58310,0.19667,0.66315,-0.11215,0.64933,1
1,0,1,1,1,0.51250,0.62500,-1,1,1,0.02500,0.03125,1,1,0,0,1,-1,1,1,1,1,0.31250,1,1,1,1,1,1,1,-0.94375,1,0,0,0
1,0,1,0.54902,0.62745,1,0.01961,1,-0.49020,0.92157,-0.82353,0.58824,-1,0.11765,-0.96078,-0.33333,-0.64706,-0.68627,-0.23529,-0.86275,0.35294,-1,0.74510,-0.72549,0.92157,-0.21569,0.92874,0.21876,0.72549,0.56863,0.23529,0.90196,-0.11765,0.90196,1
1,0,0,0,-1,-1,-1,1,0,0,-1,1,1,1,1,-1,0,0,0,0,-1,-1,-1,1,1,0.43750,1,-1,0,0,-1,-1,-1,1,0
1,0,0.44444,0.44444,0.53695,0.90763,-0.22222,1,-0.33333,0.88889,-1,0.33333,-1,-0.11111,-1,-0.22222,-0.66667,-0.77778,0.55556,-1,-0.22222,-0.77778,0.77778,-0.22222,0.33333,0,0.92120,0.45019,0.57454,0.84353,0.22222,1,-0.55556,1,1
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0
1,0,1,0,1,0,0.5,0.50000,0.75000,0,0.91201,0.12094,0.89067,0.14210,0.86922,0.16228,0.75000,0.25000,0.75000,0.5,0.75000,0,1,-0.25000,0.5,0.50000,0.73944,0.26388,0.75000,0.25000,0.69635,0.29074,0.67493,0.30293,1
0,0,-1,1,1,1,0,0,1,-1,1,-1,1,-1,-1,-1,0,0,-1,-1,0,0,0,0,-1,-1,1,-1,1,1,-1,-1,0,0,0
1,0,1,0,1,0,0.66667,0.11111,1,-0.11111,0.88889,-0.11111,1,-0.22222,0.77778,0,0.77778,0,1,-0.11111,0.77778,-0.11111,0.66667,-0.11111,0.66667,0,0.90347,-0.05352,1,0.11111,0.88889,-0.11111,1,0,1
0,0,0,0,0,0,0,0,0,0,0,0,-1,-1,0,0,1,0.75000,0,0,0,0,-1,1,0,0,1,-1,-1,-1,1,1,0,0,0
1,0,1,0.45455,1,-0.45455,1,0.09091,1,-0.09091,1,0,1,-0.27273,1,-0.18182,1,0.09091,1,0,1,-0.36364,1,0.09091,1,-0.09091,1,-0.04914,1,0.45455,1,-0.27273,1,-0.18182,1
1,0,0.62121,-0.63636,0,0,0,0,0.34470,0.28788,0.42803,0.39394,-0.07576,0.51894,0.36364,0.31439,-0.53788,0.32955,0.12121,-0.14773,0.01894,-0.53409,-0.57576,0.17803,0.29167,-0.27273,0.25758,-0.57576,0.43182,0.24242,0.18182,-0.02273,0.17045,-0.41667,0
1,0,1,0.11765,1,0.23529,1,0.41176,1,0.05882,1,0.23529,1,0.11765,1,0.47059,1,-0.05882,1,-0.11765,1,0.35294,1,0.41176,1,-0.11765,1,0.20225,1,0.05882,1,0.35294,1,0.23529,1
1,0,0,0,-1,-0.62766,1,0.51064,0.07979,-0.23404,-1,-0.36170,0.12766,-0.59043,1,-1,0,0,0.82979,-0.07979,-0.25000,1,0.17021,-0.70745,0,0,-0.19149,-0.46809,-0.22340,-0.48936,0.74468,0.90426,-0.67553,0.45745,0
1,0,0.91667,0.29167,0.83333,-0.16667,0.70833,0.25000,0.87500,-0.08333,0.91667,0.04167,0.83333,0.12500,0.70833,0,0.87500,0.04167,1,0.08333,0.66667,-0.08333,0.75000,0.16667,0.83333,-0.12500,0.83796,0.05503,1,0.20833,0.70833,0,0.70833,0.04167,1
1,0,0.18590,-0.16667,0,0,0,0,0,0,0,0,0.11538,-0.19071,0,0,0,0,0,0,0,0,-0.05128,-0.06571,0.07853,0.08974,0.17308,-0.10897,0.12500,0.09615,0.02564,-0.04808,0.16827,0.19551,0
1,0,1,-0.08183,1,-0.11326,0.99246,-0.29802,1,-0.33075,0.96662,-0.34281,0.85788,-0.47265,0.91904,-0.48170,0.73084,-0.65224,0.68131,-0.63544,0.82450,-0.78316,0.58829,-0.74785,0.67033,-0.96296,0.48757,-0.85669,0.37941,-0.83893,0.24117,-0.88846,0.29221,-0.89621,1
1,0,1,1,-1,1,-1,-0.82456,0.34649,0.21053,0.46053,0.07018,0.22807,0.05702,0.35088,0.34649,0.72807,-0.03947,0.22807,0.53070,0,0,-0.29825,-0.16228,1,-0.66667,1,-1,1,-0.24561,0.35088,0.20175,0.82895,0.07895,0
1,0,1,0.24077,0.99815,0.00369,0.80244,-0.30133,0.89919,-0.23486,0.70643,-0.24077,0.73855,-0.30539,0.71492,-0.36078,0.47194,-0.61189,0.40473,-0.55059,0.61041,-0.39328,0.53176,-0.32681,0.23966,-0.52142,0.29208,-0.48390,0.12777,-0.39143,0.15657,-0.51329,0.18353,-0.46603,1
0,0,-1,1,1,-1,0,0,0,0,1,-1,1,1,0,0,1,-1,0,0,0,0,1,1,-1,1,1,-1,-1,1,-1,-1,0,0,0
1,0,0.92247,-0.19448,0.96419,-0.17674,0.87024,-0.22602,0.81702,-0.27070,0.79271,-0.28909,0.70302,-0.49639,0.63338,-0.49967,0.37254,-0.70729,0.27070,-0.72109,0.40506,-0.54172,0.33509,-0.59691,0.14750,-0.63601,0.09312,-0.59589,-0.07162,-0.54928,-0.01840,-0.54074,-0.07457,-0.47898,1
1,0,-1,-1,-0.50694,1,1,-1,1,0.53819,0,0,0.23958,-1,1,1,0,0,1,1,1,1,0,0,-0.71528,1,0.33333,-1,1,-1,0.69792,-1,0.47569,1,0
1,0,0.84177,0.43460,0.5,0.76160,0.09916,0.93460,-0.37764,0.88186,-0.72363,0.61181,-0.93882,0.19409,-0.86709,-0.25527,-0.62869,-0.65612,-0.25105,-0.85654,0.16245,-0.86498,0.51477,-0.66878,0.74895,-0.28903,0.77937,0.07933,0.64135,0.42827,0.31435,0.62447,-0.00422,0.69409,1
1,0,1,1,0,0,1,-1,-1,-1,1,1,1,-1,0,0,1,-1,1,1,0,0,1,-1,-1,-1,1,1,-1,1,-1,1,0,0,0
1,0,1,0.63548,1,1,0.77123,1,-0.33333,1,-1,1,0,1,-1,1,-1,0,-1,-0.66667,-1,-0.92536,-1,-0.33333,-0.33333,-1,0.19235,-1,1,-1,0,-1,1,-0.66667,1
0,0,-1,1,-1,-1,0,0,-1,1,1,-1,-1,-1,-1,1,0,0,-1,-1,-1,1,0,0,1,-1,1,1,1,-1,1,1,0,0,0
1,0,1,0.06843,1,0.14211,1,0.22108,1,-0.12500,1,0.39495,1,0.48981,1,0.58986,-0.37500,1,1,0,1,0.92001,1,1,1,1,1,1,1,0.25000,1,1,1,1,1
0,0,-1,-1,0,0,0,0,0,0,0,0,0,0,1,-1,0,0,-1,-1,0,0,1,1,1,-1,1,-1,0,0,0,0,0,0,0
1,0,0.64947,-0.07896,0.58264,-0.14380,-0.13129,-0.21384,0.29796,0.04403,0.38096,-0.26339,0.28931,-0.31997,0.03459,-0.18947,0.20269,-0.29441,0.15196,-0.29052,0.09513,-0.31525,0.06556,-0.26795,0.03004,-0.25124,-0.00046,-0.23210,-0.02612,-0.21129,-0.04717,-0.18950,0.01336,-0.27201,1
1,0,0,0,0,0,0,0,0,0,1,-0.33333,0.16667,0.26042,0,0,0,0,0,0,-0.19792,-0.21875,-0.16667,0.90625,-1,0.5,0.04167,0.75000,-0.22917,-1,-0.12500,-0.27083,-0.19792,-0.93750,0
1,0,1,0.05149,0.99363,0.10123,0.96142,0.14756,0.95513,-0.26496,0.66026,0.54701,0.80426,0.25283,0.73781,0.27380,0.66775,0.28714,0.59615,0.29304,0.52494,0.29200,0.45582,0.28476,0.39023,0.27226,0.32930,0.25553,0.27381,0.23568,0.22427,0.21378,0.18086,0.19083,1
1,0,1,-0.09524,-1,-1,-1,-1,1,0.31746,0.81349,0.76190,-1,-1,-1,1,0.47364,1,1,1,0.68839,-1,-1,-1,0.82937,0.36508,1,1,1,0.50794,-1,-0.32540,-1,0.72831,0
1,0,0.93669,-0.00190,0.60761,0.43204,0.92314,-0.40129,0.93123,0.16828,0.96197,0.09061,0.99676,0.08172,0.91586,0.05097,0.84628,-0.25324,0.87379,-0.14482,0.84871,0.26133,0.75081,-0.03641,0.84547,-0.02589,0.87293,-0.02302,0.98544,0.09385,0.78317,-0.10194,0.85841,-0.14725,1
1,0,1,-1,1,1,1,1,1,-0.5,1,1,1,1,1,1,0,0,1,1,1,1,1,-1,1,1,1,0.62500,1,-0.75000,-0.75000,1,1,1,0
1,0,1,0.23058,1,-0.78509,1,-0.10401,1,0.15414,1,0.27820,0.98120,-0.06861,1,0.06610,0.95802,-0.18954,0.83584,-0.15633,0.97400,0.03728,0.99624,0.09242,1,-0.01253,0.96238,-0.04597,0.91165,0.03885,1,-0.13722,0.96523,-0.11717,1
1,0,0.36876,-1,-1,-1,-0.07661,1,1,0.95041,0.74597,-0.38710,-1,-0.79313,-0.09677,1,0.48684,0.46502,0.31755,-0.27461,-0.14343,-0.20188,-0.11976,0.06895,0.03021,0.06639,0.03443,-0.01186,-0.00403,-0.01672,-0.00761,0.00108,0.00015,0.00325,0
1,0,0.79847,0.38265,0.80804,-0.16964,1,-0.07653,0.98151,-0.07398,0.70217,0.20663,0.99745,0.02105,0.98214,0.02487,1,-0.13074,0.95663,0.07717,1,0.00191,0.90306,0.30804,1,-0.14541,1,-0.00394,0.75638,0.07908,1,-0.18750,1,-0.05740,1
0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,-1,0,0,1,1,1,-1,1,1,1,0,1,1,1,-1,0,0,0
1,0,1,-0.28428,1,-0.25346,0.94623,-0.35094,1,-0.30566,0.92736,-0.49057,0.90818,-0.44119,0.75723,-0.58899,0.69748,-0.58019,0.59623,-0.57579,0.68459,-0.70975,0.54465,-0.87327,0.49214,-0.73333,0.35504,-0.76054,0.26352,-0.78239,0.16604,-0.73145,0.13994,-0.70000,1
1,0,0,0,0,0,0,0,-0.85000,-1,0,0,1,-1,0,0,-1,-1,-1,-1,1,-1,-0.60000,-1,1,1,-1,-0.20000,1,-1,0,1,0,0,0
1,0,1,0.09091,0.95455,-0.09091,0.77273,0,1,0,0.95455,0,1,0.04545,0.90909,-0.04545,1,0,1,0,0.86364,0.09091,0.77273,0.09091,0.90909,0.04545,0.91541,0.02897,0.95455,0.09091,0.86364,-0.09091,0.86364,0.04545,1
0,0,0,0,-1,1,1,1,-1,-1,0,0,-1,-1,-1,-0.31250,-1,-1,1,-1,1,-1,0,0,1,-1,-1,-1,0,0,1,-1,0,0,0
1,0,0.91176,-0.08824,0.97059,0.17647,0.82353,0.08824,0.91176,-0.02941,0.97059,-0.17647,0.97059,0.14706,0.94118,0.02941,1,0,1,0,0.76471,0.11765,0.88235,0.02941,0.85294,0.02941,0.92663,0.02600,0.94118,-0.11765,0.97059,0.05882,0.91176,0.05882,1
1,0,-1,1,-1,0.15244,0.28354,1,-1,1,-1,-1,1,1,-1,-0.23476,0.28301,-1,1,1,-0.31402,-1,-1,-1,1,-1,-1,-0.03578,1,-1,-1,-0.32317,0.14939,1,0
1,0,0.47368,-0.10526,0.83781,0.01756,0.83155,0.02615,0.68421,-0.05263,0.68421,0,0.79856,0.05028,0.78315,0.05756,0.84211,0.47368,1,0.05263,0.72550,0.07631,0.70301,0.08141,0.42105,0.21053,0.65419,0.08968,0.52632,-0.21053,0.60150,0.09534,0.57418,0.09719,1
1,0,-0.00641,-0.5,0,0,-0.01923,1,0,0,0,0,0,0,0,0,0,0,0.31410,0.92949,-0.35256,0.74359,-0.34615,-0.80769,0,0,-0.61538,-0.51282,0,0,0,0,0,0,0
1,0,1,0.45455,1,0.54545,0.81818,0.63636,1,-0.09091,1,0,0.81818,-0.45455,0.63636,0.27273,1,-0.63636,1,-0.27273,0.90909,-0.45455,1,0.07750,1,-0.09091,1,0.08867,1,0.36364,1,0.63636,0.72727,0.27273,1
0,0,-1,-1,1,-1,-1,1,0,0,1,-1,1,-1,0,0,0,0,0,0,-1,1,1,-1,-1,1,1,1,0,0,1,0.5,0,0,0
1,0,0.45455,0.09091,0.63636,0.09091,0.27273,0.18182,0.63636,0,0.36364,-0.09091,0.45455,-0.09091,0.48612,-0.01343,0.63636,-0.18182,0.45455,0,0.36364,-0.09091,0.27273,0.18182,0.36364,-0.09091,0.34442,-0.01768,0.27273,0,0.36364,0,0.28985,-0.01832,1
1,0,-1,-0.59677,0,0,-1,0.64516,-0.87097,1,0,0,0,0,0,0,0,0,0,0,-1,-1,0,0,0.29839,0.23387,1,0.51613,0,0,0,0,0,0,0
1,0,1,0.14286,1,0.71429,1,0.71429,1,-0.14286,0.85714,-0.14286,1,0.02534,1,0,0.42857,-0.14286,1,0.03617,1,-0.28571,1,0,0.28571,-0.28571,1,0.04891,1,0.05182,1,0.57143,1,0,1
0,0,1,1,1,-1,1,1,1,1,1,1,1,-1,1,1,1,-1,1,-1,1,1,1,1,1,-1,1,1,1,1,1,1,1,1,0
1,0,0.87032,0.46972,0.53945,0.82161,0.10380,0.95275,-0.38033,0.87916,-0.73939,0.58226,-0.92099,0.16731,-0.82417,-0.24942,-0.59383,-0.63342,-0.24012,-0.82881,0.18823,-0.78699,0.51557,-0.57430,0.69274,-0.24843,0.69097,0.10484,0.52798,0.39762,0.25974,0.56573,-0.06739,0.57552,1
0,0,1,-1,1,1,1,-1,1,1,1,-1,1,-1,1,-1,1,1,1,1,1,1,1,-1,1,1,1,1,1,1,1,1,1,-1,0
1,0,0.92657,0.04174,0.89266,0.15766,0.86098,0.19791,0.83675,0.36526,0.80619,0.40198,0.76221,0.40552,0.66586,0.48360,0.60101,0.51752,0.53392,0.52180,0.48435,0.54212,0.42546,0.55684,0.33340,0.55274,0.26978,0.54214,0.22307,0.53448,0.14312,0.49124,0.11573,0.46571,1
0,0,1,1,1,-1,1,-1,1,1,0,0,1,-1,0,0,0,0,0,0,-1,1,1,1,0,0,1,1,0,0,-1,-1,0,0,0
1,0,0.93537,0.13645,0.93716,0.25359,0.85705,0.38779,0.79039,0.47127,0.72352,0.59942,0.65260,0.75000,0.50830,0.73586,0.41629,0.82742,0.25539,0.85952,0.13712,0.85615,0.00494,0.88869,-0.07361,0.79780,-0.20995,0.78004,-0.33169,0.71454,-0.38532,0.64363,-0.47419,0.55835,1
0,0,1,-1,-1,1,-1,1,1,1,1,1,-1,-1,-1,-1,1,1,1,-1,-1,-1,-1,-1,1,0,1,-1,1,-1,-1,1,-1,1,0
1,0,0.80627,0.13069,0.73061,0.24323,0.64615,0.19038,0.36923,0.45577,0.44793,0.46439,0.25000,0.57308,0.25192,0.37115,0.15215,0.51877,-0.09808,0.57500,-0.03462,0.42885,-0.08856,0.44424,-0.14943,0.40006,-0.19940,0.34976,-0.23832,0.29541,-0.26634,0.23896,-0.23846,0.31154,1
0,0,1,-1,1,1,1,-1,1,1,1,-1,1,1,1,-1,1,-1,1,1,1,1,1,-1,1,-1,1,-1,1,1,1,-1,1,1,0
1,0,0.97467,0.13082,0.94120,0.20036,0.88783,0.32248,0.89009,0.32711,0.85550,0.45217,0.72298,0.52284,0.69946,0.58820,0.58548,0.66893,0.48869,0.70398,0.44245,0.68159,0.35289,0.75622,0.26832,0.76210,0.16813,0.78541,0.07497,0.80439,-0.02962,0.77702,-0.10289,0.74242,1
0,0,0,0,1,1,0,0,1,1,0,0,1,-1,0,0,0,0,0,0,0,0,0,0,0,0,1,-1,0,0,-1,1,0,0,0
1,0,0.92308,0.15451,0.86399,0.29757,0.72582,0.36790,0.70588,0.56830,0.57449,0.62719,0.43270,0.74676,0.31705,0.67697,0.19128,0.76818,0.04686,0.76171,-0.12064,0.76969,-0.18479,0.71327,-0.29291,0.65708,-0.38798,0.58553,-0.46799,0.50131,-0.53146,0.40732,-0.56231,0.35095,1
0,0,0,0,1,1,1,1,0,0,0,0,-1,-1,0,0,-1,-1,0,0,0,0,1,1,0,0,1,1,0,0,-1,1,0,0,0
1,0,0.88804,0.38138,0.65926,0.69431,0.29148,0.87892,-0.06726,0.90135,-0.39597,0.80441,-0.64574,0.56502,-0.82960,0.26906,-0.78940,-0.08205,-0.62780,-0.30942,-0.46637,-0.55605,-0.16449,-0.64338,0.09562,-0.61055,0.30406,-0.48392,0.43227,-0.29838,0.47029,-0.09461,0.42152,0.12556,1
0,0,1,-1,1,1,1,1,1,1,1,1,1,-1,1,1,1,1,1,-1,1,-1,1,-1,1,-1,1,1,1,-1,1,1,1,1,0
1,0,0.73523,-0.38293,0.80151,0.10278,0.78826,0.15266,0.55580,0.05252,1,0.21225,0.71947,0.28954,0.68798,0.32925,0.49672,0.17287,0.64333,-0.02845,0.57399,0.42528,0.53120,0.44872,0.94530,0.57549,0.44174,0.48200,0.12473,1,0.35070,0.49721,0.30588,0.49831,1
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0
1,0,0.94649,0.00892,0.97287,-0.00260,0.98922,0.00372,0.95801,0.01598,0.94054,0.03530,0.97213,0.04719,0.98625,0.01858,0.94277,0.07135,0.98551,-0.00706,0.97770,0.04980,0.96358,0.07098,0.93274,0.08101,0.95243,0.04356,0.97473,0.00818,0.97845,0.07061,1,-0.00260,1
0,0,1,1,-1,-1,-1,-1,0,0,0,0,-1,-1,0,0,0,0,0,0,-1,1,1,1,0,0,1,-1,0,0,-1,-1,-1,-1,0
1,0,0.50466,-0.16900,0.71442,0.01513,0.71063,0.02258,0.68065,0.01282,0.34615,0.05594,0.69050,0.04393,0.68101,0.05058,0.67023,0.05692,0.63403,-0.04662,0.64503,0.06856,0.63077,0.07381,0.84033,0.18065,0.59935,0.08304,0.38228,0.06760,0.56466,0.09046,0.54632,0.09346,1
1,0,0.68729,1,0.91973,-0.76087,0.81773,0.04348,0.76087,0.10702,0.86789,0.73746,0.70067,0.18227,0.75920,0.13712,0.93478,-0.25084,0.70736,0.18729,0.64883,0.24582,0.60201,0.77425,1,-0.53846,0.89262,0.22216,0.71070,0.53846,1,-0.06522,0.56522,0.23913,0
1,0,0.76296,-0.07778,1,-0.29630,1,-0.85741,0.80000,0.06111,0.45556,-0.42778,1,-0.12581,1,-0.83519,0.49259,0.01852,0.82222,-0.05926,0.98215,-0.19938,1,0.22037,0.69630,-0.26481,0.92148,-0.24549,0.78889,0.02037,0.87492,-0.27105,1,-0.57037,1
1,0,0.38521,0.15564,0.41245,0.07393,0.26459,0.24125,0.23346,0.13230,0.19455,0.25292,0.24514,0.36965,0.08949,0.22957,-0.03891,0.36965,0.05058,0.24903,0.24903,0.09728,0.07782,0.29961,-0.02494,0.28482,-0.06024,0.26256,-0.14786,0.14786,-0.09339,0.31128,-0.19066,0.28794,0
1,0,0.57540,-0.03175,0.75198,-0.05357,0.61508,-0.01190,0.53968,0.03373,0.61706,0.09921,0.59127,-0.02381,0.62698,0.01190,0.70833,0.02579,0.60317,0.01587,0.47817,-0.02778,0.59127,0.03770,0.5,0.03968,0.61291,-0.01237,0.61706,-0.13492,0.68849,-0.01389,0.62500,-0.03175,1
1,0,0.06404,-0.15271,-0.04433,0.05911,0.08374,-0.02463,-0.01478,0.18719,0.06404,0,0.12315,-0.09852,0.05911,0,0.01970,-0.02956,-0.12808,-0.20690,0.06897,0.01478,0.06897,0.02956,0.07882,0.16256,0.28079,-0.04926,-0.05911,-0.09360,0.04433,0.05419,0.07389,-0.10837,0
1,0,0.61857,0.10850,0.70694,-0.06935,0.70358,0.01678,0.74273,0.00224,0.71029,0.15772,0.71588,-0.00224,0.79754,0.06600,0.83669,-0.16555,0.68680,-0.09060,0.62528,-0.01342,0.60962,0.11745,0.71253,-0.09508,0.69845,-0.01673,0.63311,0.04810,0.78859,-0.05145,0.65213,-0.04698,1
1,0,0.25316,0.35949,0,0,-0.29620,-1,0,0,0.07595,-0.07342,0,0,0,0,0,0,0,0,0.00759,0.68101,-0.20000,0.33671,-0.10380,0.35696,0.05570,-1,0,0,0.06329,-1,0,0,0
1,0,0.88103,-0.00857,0.89818,-0.02465,0.94105,-0.01822,0.89175,-0.12755,0.82208,-0.10932,0.88853,0.01179,0.90782,-0.13719,0.87138,-0.06109,0.90782,-0.02358,0.87996,-0.14577,0.82851,-0.12433,0.90139,-0.19507,0.88245,-0.14903,0.84352,-0.12862,0.88424,-0.18542,0.91747,-0.16827,1
1,0,0.42708,-0.5,0,0,0,0,0.46458,0.51042,0.58958,0.02083,0,0,0,0,0.16458,-0.45417,0.59167,-0.18333,0,0,0,0,0.98750,-0.40833,-1,-1,-0.27917,-0.75625,0,0,0,0,0
1,0,0.88853,0.01631,0.92007,0.01305,0.92442,0.01359,0.89179,-0.10223,0.90103,-0.08428,0.93040,-0.01033,0.93094,-0.08918,0.86025,-0.05057,0.89451,-0.04024,0.88418,-0.12126,0.88907,-0.11909,0.82980,-0.14138,0.86453,-0.11808,0.85536,-0.13051,0.83524,-0.12452,0.86786,-0.12235,1
1,0,0,0,1,0.12889,0.88444,-0.02000,0,0,1,-0.42444,1,0.19556,1,-0.05333,1,-0.81556,0,0,1,-0.04000,1,-0.18667,0,0,1,-1,0,0,1,0.11778,0.90667,-0.09556,0
1,0,0.81143,0.03714,0.85143,-0.00143,0.79000,0.00714,0.79571,-0.04286,0.87571,0,0.85571,-0.06714,0.86429,0.00286,0.82857,-0.05429,0.81000,-0.11857,0.76857,-0.08429,0.84286,-0.05000,0.77000,-0.06857,0.81598,-0.08669,0.82571,-0.10429,0.81429,-0.05000,0.82143,-0.15143,1
1,0,0,0,0,0,0,0,0,0,0,0,-1,1,1,0.55172,0,0,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0
1,0,0.49870,0.01818,0.43117,-0.09610,0.50649,-0.04156,0.50130,0.09610,0.44675,0.05974,0.55844,-0.11948,0.51688,-0.03636,0.52727,-0.05974,0.55325,-0.01039,0.48571,-0.03377,0.49091,-0.01039,0.59221,0,0.53215,-0.03280,0.43117,0.03377,0.54545,-0.05455,0.58961,-0.08571,1
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,-1,0,0,0,0,0,0,0
1,0,1,0.5,1,0.25000,0.25000,1,0.16851,0.91180,-0.13336,0.80454,-0.34107,0.60793,-0.43820,0.37856,-0.43663,0.16709,-0.36676,0.00678,-0.26477,-0.09025,-0.16178,-0.12964,-0.07782,-0.12744,-0.02089,-0.10242,0.01033,-0.07036,0.02224,-0.04142,0.02249,-0.02017,1
1,0,0,0,0,0,1,1,-1,-1,0,0,1,-0.11111,0,0,0,0,-1,1,1,1,1,-1,0,0,1,-1,0,0,0,0,1,1,0
1,0,0.87048,0.38027,0.64099,0.69212,0.31347,0.86625,-0.03933,0.90740,-0.42173,0.79346,-0.70561,0.51560,-0.81049,0.22735,-0.81136,-0.12539,-0.67474,-0.38102,-0.38334,-0.62861,-0.13013,-0.70762,0.15552,-0.66421,0.38544,-0.51568,0.52573,-0.29897,0.56239,-0.05938,0.51460,0.16645,1
1,0,0,0,0,0,0,0,-1,1,0,0,1,0.37333,-0.12000,-0.12000,0,0,-1,-1,0,0,1,-1,0,0,1,0.22667,0,0,0,0,0,0,0
1,0,0.88179,0.43491,0.59573,0.77655,0.19672,0.94537,-0.24103,0.92544,-0.62526,0.71257,-0.86443,0.33652,-0.92384,-0.05338,-0.77356,-0.44707,-0.46950,-0.73285,-0.10237,-0.82217,0.26384,-0.77570,0.55984,-0.55910,0.72147,-0.24433,0.72478,0.09599,0.58137,0.38915,0.34749,0.57656,1
1,0,0.32834,0.02520,0.15236,0.21278,0.14919,0.74003,-0.25706,0.92324,-0.10312,0.19380,-0.61352,0.25786,-0.94053,-0.05409,-0.13117,-0.14329,-0.30315,-0.44615,-0.11409,-0.85597,0.02668,-0.22786,0.27942,-0.06295,0.33737,-0.11876,0.27657,-0.11409,0.15078,0.13296,0.12197,0.20468,1
1,0,0.83427,0.39121,0.54040,0.78579,0.12326,0.89402,-0.33221,0.83578,-0.70086,0.59564,-0.86622,0.21909,-0.84442,-0.24164,-0.59714,-0.61894,-0.19354,-0.87787,0.12439,-0.89064,0.51109,-0.72454,0.79143,-0.27734,0.83008,0.08718,0.66592,0.49079,0.37542,0.70011,-0.03983,0.79444,1
1,0,0.62335,-0.03490,0.59085,0.00481,0.60409,-0.07461,0.63177,0.00963,0.62455,-0.07461,0.67028,0.07220,0.62936,-0.08424,0.67509,0.09146,0.67148,0,0.58965,0.10108,0.50060,0.03129,0.65945,0.14079,0.60463,0.02019,0.51384,0.04452,0.61733,-0.00963,0.61372,-0.09146,1
1,0,0.74449,-0.02390,0.70772,0.03309,0.72243,0.16912,0.79228,0.07721,0.81434,0.43934,0.63787,0.00551,0.70772,0.21691,1,0.06066,0.61029,0.05147,0.67463,0.04228,0.52022,-0.25000,0.72978,-0.15809,0.61727,0.07124,0.30882,0.08640,0.55916,0.07458,0.60294,0.21691,1
1,0,0.61538,0.18923,0.78157,0.01780,0.77486,0.02647,0.65077,-0.10308,0.77538,0.08000,0.73961,0.05060,0.72322,0.05776,0.68615,-0.08923,0.61692,0.16308,0.66233,0.07573,0.63878,0.08041,0.60154,-0.07231,0.58803,0.08767,0.55077,0.25692,0.53389,0.09207,0.50609,0.09322,1
1,0,0.68317,0.05375,0.84803,0.00202,0.84341,0.00301,0.84300,0.09901,0.75813,0.04102,0.81892,0.00585,0.80738,0.00673,0.80622,-0.12447,0.77935,-0.03536,0.76365,0.00909,0.74635,0.00978,0.79632,-0.04243,0.70824,0.01096,0.62235,0.11598,0.66624,0.01190,0.64407,0.01227,1
1,0,0.5,0,0.38696,0.10435,0.49130,0.06522,0.46957,-0.03913,0.35652,-0.12609,0.45652,0.04783,0.50435,0.02609,0.35652,0.19565,0.42174,0.14783,0.42174,-0.02609,0.32174,-0.11304,0.47391,-0.00870,0.41789,0.06908,0.38696,0.03913,0.35217,0.14783,0.44783,0.17391,1
1,0,0.79830,0.09417,0.78129,0.20656,0.71628,0.28068,0.69320,0.41252,0.65917,0.50122,0.57898,0.60814,0.49210,0.58445,0.33354,0.67861,0.29587,0.63548,0.09599,0.68104,0.02066,0.72236,-0.08748,0.63183,-0.11925,0.60696,-0.18226,0.56015,-0.25516,0.51701,-0.27339,0.42467,1
1,0,1,0.09802,1,0.25101,0.98390,0.33044,0.80365,0.53020,0.74977,0.60297,0.56937,0.71942,0.55311,0.74079,0.29452,0.82193,0.21137,0.79777,0.09709,0.82162,-0.01734,0.79870,-0.15144,0.75596,-0.22839,0.69187,-0.31713,0.60948,-0.40291,0.54522,-0.42815,0.44534,1
1,0,0.89410,0.13425,0.87001,0.31543,0.78896,0.43388,0.63388,0.59975,0.54003,0.71016,0.39699,0.76161,0.24266,0.79523,0.09134,0.79598,-0.09159,0.76261,-0.20201,0.66926,-0.30263,0.62610,-0.40552,0.50489,-0.46215,0.40753,-0.50314,0.27252,-0.52823,0.19172,-0.48808,0.05972,1
1,0,0.94631,0.17498,0.90946,0.33143,0.85096,0.49960,0.73678,0.63842,0.59215,0.73838,0.48698,0.83614,0.30459,0.90665,0.17959,0.93429,-0.00701,0.93109,-0.18880,0.89383,-0.33023,0.82492,-0.46534,0.76482,-0.58563,0.66335,-0.67929,0.52564,-0.75321,0.42488,-0.81210,0.26092,1
1,0,0.91767,0.18198,0.86090,0.35543,0.72873,0.45747,0.60425,0.69865,0.50376,0.74922,0.36100,0.81795,0.15664,0.83558,0.00396,0.85210,-0.16390,0.77853,-0.35996,0.76193,-0.43087,0.65385,-0.53140,0.53886,-0.60328,0.40972,-0.64511,0.27338,-0.65710,0.13667,-0.64056,0.05394,1
1,0,0.76627,0.21106,0.63935,0.38112,0.48409,0.52500,0.15000,0.22273,0.13753,0.59565,-0.07727,0.44545,0,0.48636,-0.27491,0.42014,-0.56136,0.36818,-0.36591,0.18864,-0.40533,0.07588,-0.38483,-0.03229,-0.33942,-0.12486,-0.27540,-0.19714,-0.19962,-0.24648,-0.11894,-0.27218,1
1,0,0.58940,-0.60927,0.85430,0.55298,0.81126,0.07285,0.56623,0.16225,0.32781,0.24172,0.50331,0.12252,0.63907,0.19868,0.71854,0.42715,0.54305,0.13907,0.65232,0.27815,0.68874,0.07285,0.51872,0.26653,0.49013,0.27687,0.46216,0.28574,0.43484,0.29324,0.40821,0.29942,1
1,0,1,0.11385,0.70019,-0.12144,0.81594,0.09677,0.71157,0.01139,0.56167,-0.07780,0.69070,0.12524,0.58634,0.03985,0.53131,-0.03416,0.69450,0.16888,0.72676,0.07211,0.32068,0.05882,0.53321,0.37381,0.49090,0.17951,0.15180,0.32448,0.44141,0.18897,0.56167,0.15180,1
1,0,0.84843,0.06794,0.80562,-0.02299,0.77031,-0.03299,0.66725,-0.06620,0.59582,-0.07666,0.67260,-0.05771,0.64260,-0.06438,0.39199,0.04530,0.71254,0.01394,0.55970,-0.08039,0.53430,-0.08453,0.47038,-0.22822,0.48659,-0.09128,0.52613,-0.08537,0.44277,-0.09621,0.42223,-0.09808,1
1,0,1,0.08013,0.96775,-0.00482,0.96683,-0.00722,0.87980,-0.03923,1,0.01419,0.96186,-0.01436,0.95947,-0.01671,0.98497,0.01002,0.91152,-0.08848,0.95016,-0.02364,0.94636,-0.02591,0.98164,0.02003,0.93772,-0.03034,1,-0.05843,0.92774,-0.03464,0.92226,-0.03673,1
1,0,0.47938,-0.12371,0.42784,-0.12371,0.70103,-0.39175,0.73196,0.07216,0.26289,-0.21649,0.49485,0.15979,0.45361,-0.11856,0.42268,0.06186,0.5,-0.27320,0.54639,0.18557,0.42268,0.08247,0.70619,0.19588,0.53396,-0.12447,0.15464,-0.26289,0.47423,0.04124,0.45361,-0.51546,1
1,0,0.63510,-0.04388,0.76530,0.02968,0.61432,0.36028,0.65358,-0.00462,0.64203,0.08314,0.79446,-0.43418,0.72517,0.54965,0.59584,0.13857,0.63510,0.21940,0.63279,-0.25404,0.70951,0.15359,0.64665,0.23095,0.68775,0.17704,0.61663,0.07621,0.66316,0.19841,0.69053,0.36721,1
1,0,0.50112,-0.03596,0.61124,0.01348,0.58876,0.01573,0.58876,0.02472,0.66742,-0.00449,0.71685,-0.04719,0.66517,0.00899,0.57303,0.02472,0.64719,-0.07416,0.56854,0.14157,0.57528,-0.03596,0.46517,0.04944,0.56588,0.00824,0.47640,-0.03596,0.54607,0.10562,0.60674,-0.08090,1
1,0,0.71521,-0.00647,0.66667,-0.04207,0.63107,-0.05178,0.77994,0.08091,0.67314,0.09709,0.64725,0.15858,0.60194,-0.01942,0.54369,-0.04531,0.46926,-0.10032,0.64725,0.14887,0.39159,0.21683,0.52427,-0.05502,0.45105,0.00040,0.31392,-0.06796,0.49191,-0.10680,0.30421,-0.05178,1
1,0,0.68148,0.10370,0.77037,0.03457,0.65185,0.08148,0.60988,-0.00494,0.79012,0.11852,0.59753,0.04938,0.62469,0.09630,0.78272,-0.17531,0.73827,-0.10864,0.48642,0.00988,0.60988,0.08148,0.66667,-0.12840,0.63773,-0.02451,0.76543,0.02222,0.61235,-0.07160,0.51358,-0.04691,1
1,0,0.60678,-0.02712,0.67119,0.04068,0.52881,-0.04407,0.50508,0.03729,0.70508,-0.07797,0.57966,-0.02034,0.53220,0.07797,0.64068,0.11864,0.56949,-0.02373,0.53220,0.00678,0.71525,-0.03390,0.52881,-0.03390,0.57262,0.00750,0.58644,-0.00339,0.58983,-0.02712,0.50169,0.06780,1
1,0,0.49515,0.09709,0.29612,0.05825,0.34951,0,0.57282,-0.02427,0.58252,0.02427,0.33495,0.04854,0.52427,0.00485,0.47087,-0.10680,0.43204,0.00485,0.34951,0.05825,0.18932,0.25728,0.31068,-0.15049,0.36547,0.03815,0.39320,0.17476,0.26214,0,0.37379,-0.01942,1
1,0,0.98822,0.02187,0.93102,0.34100,0.83904,0.35222,0.74706,0.48906,0.73584,0.51879,0.55076,0.60179,0.43130,0.66237,0.31800,0.70443,0.28379,0.68873,0.07515,0.73696,0.06338,0.71284,-0.16489,0.69714,-0.16556,0.60510,-0.16209,0.55805,-0.34717,0.44195,-0.33483,0.37465,1
1,0,0.97905,0.15810,0.90112,0.35237,0.82039,0.48561,0.71760,0.64888,0.58827,0.73743,0.40349,0.83156,0.25140,0.84804,0.04700,0.85475,-0.12193,0.79749,-0.26180,0.80754,-0.37835,0.71676,-0.51034,0.58324,-0.57587,0.46040,-0.61899,0.30796,-0.65754,0.18345,-0.64134,0.02968,1
1,0,0.99701,0.21677,0.91966,0.47030,0.76902,0.62415,0.53312,0.78120,0.36774,0.88291,0.10107,0.83312,-0.06827,0.89274,-0.28269,0.72073,-0.43707,0.61688,-0.55769,0.48120,-0.65000,0.35534,-0.64658,0.15908,-0.66651,0.02277,-0.64872,-0.13462,-0.54615,-0.22949,-0.47201,-0.35032,1
1,0,0.94331,0.19959,0.96132,0.40803,0.80514,0.56569,0.56687,0.70830,0.41836,0.83230,0.14939,0.89489,0.05167,0.93682,-0.24742,0.83939,-0.42811,0.75554,-0.50251,0.62563,-0.65515,0.50428,-0.68851,0.30912,-0.77097,0.15619,-0.75406,-0.04399,-0.75199,-0.17921,-0.66932,-0.34367,1
1,0,0.93972,0.28082,0.80486,0.52821,0.58167,0.73151,0.34961,0.80511,0.10797,0.90403,-0.20015,0.89335,-0.39730,0.82163,-0.58835,0.62867,-0.76305,0.40368,-0.81262,0.18888,-0.81317,-0.04284,-0.75273,-0.26883,-0.63237,-0.46438,-0.46422,-0.61446,-0.26389,-0.70835,-0.08937,-0.71273,1
1,0,0.89835,0.35157,0.67333,0.62233,0.43898,0.94353,-0.03643,0.80510,-0.22838,0.75334,-0.25137,0.48816,-0.57377,0.28415,-0.66750,0.10591,-0.47359,-0.06193,-0.81056,-0.06011,-0.33197,-0.47592,-0.12897,-0.53620,0.07158,-0.51925,0.24321,-0.43478,0.36586,-0.30057,0.42805,0.13297,1
1,0,0.29073,0.10025,0.23308,0.17293,0.03759,0.34336,0.12030,0.26316,0.06266,0.21303,-0.04725,0.12767,-0.06333,0.07907,-0.06328,0.04097,-0.05431,0.01408,-0.04166,-0.00280,-0.02876,-0.01176,-0.01755,-0.01505,-0.00886,-0.01475,-0.00280,-0.01250,0.00096,-0.00948,0.00290,-0.00647,1
1,0,0.58459,-0.35526,1,0.35338,0.75376,-0.00564,0.82519,0.19361,0.50188,-0.27632,0.65977,0.06391,0.69737,0.14662,0.72368,-0.42669,0.76128,0.04511,0.66917,0.20489,0.84774,-0.40977,0.64850,-0.04699,0.56836,-0.10571,0.52820,-0.13346,0.15602,-0.12218,0.44767,-0.10309,1
1,0,0.83609,0.13215,0.72171,0.06059,0.65829,0.08315,0.23888,0.12961,0.43837,0.20330,0.49418,0.12686,0.44747,0.13507,0.29352,0.02922,0.48158,0.15756,0.32835,0.14616,0.29495,0.14638,0.26436,0.14530,0.23641,0.14314,0.26429,0.16137,0.18767,0.13632,0.16655,0.13198,1
1,0,0.94080,0.11933,0.85738,0.01038,0.85124,0.01546,0.76966,-0.00278,0.84459,0.10916,0.83289,0.03027,0.82680,0.03506,0.74838,0.01943,0.80019,0.02405,0.80862,0.04901,0.80259,0.05352,0.77336,0.02220,0.79058,0.06235,0.85939,0.09251,0.77863,0.07090,0.77269,0.07508,1
1,0,0.87111,0.04326,0.79946,0.18297,0.99009,0.29292,0.89455,-0.08337,0.88598,-0.02028,0.90446,-0.26724,0.89410,0.19964,0.88644,-0.04642,0.84452,-0.00991,0.97882,-0.34024,0.78954,-0.25101,0.86661,-0.09193,0.85967,-0.02908,0.78774,-0.04101,0.75935,0.21812,0.88238,0.09193,1
1,0,0.74916,0.02549,0.98994,0.09792,0.75855,0.12877,0.74313,-0.09188,0.95842,0.02482,0.97921,-0.00469,0.96110,0.10195,0.91482,0.03756,0.71026,0.02683,0.81221,-0.08048,1,0,0.71764,-0.01207,0.82271,0.02552,0.72435,-0.01073,0.90409,0.11066,0.72837,0.02750,1
1,0,0.47337,0.19527,0.06213,-0.18343,0.62316,0.01006,0.45562,-0.04438,0.56509,0.01775,0.44675,0.27515,0.71598,-0.03846,0.55621,0.12426,0.41420,0.11538,0.52767,0.02842,0.51183,-0.10651,0.47929,-0.02367,0.46514,0.03259,0.53550,0.25148,0.31953,-0.14497,0.34615,-0.00296,1
1,0,0.59887,0.14689,0.69868,-0.13936,0.85122,-0.13936,0.80979,0.02448,0.50471,0.02825,0.67420,-0.04520,0.80791,-0.13748,0.51412,-0.24482,0.81544,-0.14313,0.70245,-0.00377,0.33333,0.06215,0.56121,-0.33145,0.61444,-0.16837,0.52731,-0.02072,0.53861,-0.31262,0.67420,-0.22034,1
1,0,0.84713,-0.03397,0.86412,-0.08493,0.81953,0,0.73673,-0.07643,0.71975,-0.13588,0.74947,-0.11677,0.77495,-0.18684,0.78132,-0.21231,0.61996,-0.10191,0.79193,-0.15711,0.89384,-0.03397,0.84926,-0.26115,0.74115,-0.23312,0.66242,-0.22293,0.72611,-0.37792,0.65817,-0.24841,1
1,0,0.87772,-0.08152,0.83424,0.07337,0.84783,0.04076,0.77174,-0.02174,0.77174,-0.05707,0.82337,-0.10598,0.67935,-0.00543,0.88043,-0.20924,0.83424,0.03261,0.86413,-0.05978,0.97283,-0.27989,0.85054,-0.18750,0.83705,-0.10211,0.85870,-0.03261,0.78533,-0.10870,0.79076,-0.00543,1
1,0,0.74704,-0.13241,0.53755,0.16996,0.72727,0.09486,0.69565,-0.11067,0.66798,-0.23518,0.87945,-0.19170,0.73715,0.04150,0.63043,-0.00395,0.63636,-0.11858,0.79249,-0.25296,0.66403,-0.28656,0.67194,-0.10474,0.61847,-0.12041,0.60079,-0.20949,0.37549,0.06917,0.61067,-0.01383,1
1,0,0.46785,0.11308,0.58980,0.00665,0.55432,0.06874,0.47894,-0.13969,0.52993,0.01330,0.63858,-0.16186,0.67849,-0.03326,0.54545,-0.13525,0.52993,-0.04656,0.47894,-0.19512,0.50776,-0.13525,0.41463,-0.20177,0.53930,-0.11455,0.59867,-0.02882,0.53659,-0.11752,0.56319,-0.04435,1
1,0,0.88116,0.27475,0.72125,0.42881,0.61559,0.63662,0.38825,0.90502,0.09831,0.96128,-0.20097,0.89200,-0.35737,0.77500,-0.65114,0.62210,-0.78768,0.45535,-0.81856,0.19095,-0.83943,-0.08079,-0.78334,-0.26356,-0.67557,-0.45511,-0.54732,-0.60858,-0.30512,-0.66700,-0.19312,-0.75597,1
1,0,0.93147,0.29282,0.79917,0.55756,0.59952,0.71596,0.26203,0.92651,0.04636,0.96748,-0.23237,0.95130,-0.55926,0.81018,-0.73329,0.62385,-0.90995,0.36200,-0.92254,0.06040,-0.93618,-0.19838,-0.83192,-0.46906,-0.65165,-0.69556,-0.41223,-0.85725,-0.13590,-0.93953,0.10007,-0.94823,1
1,0,0.88241,0.30634,0.73232,0.57816,0.34109,0.58527,0.05717,1,-0.09238,0.92118,-0.62403,0.71996,-0.69767,0.32558,-0.81422,0.41195,-1,-0.00775,-0.78973,-0.41085,-0.76901,-0.45478,-0.57242,-0.67605,-0.31610,-0.81876,-0.02979,-0.86841,0.25392,-0.82127,0.00194,-0.81686,1
1,0,0.83479,0.28993,0.69256,0.47702,0.49234,0.68381,0.21991,0.86761,-0.08096,0.85011,-0.35558,0.77681,-0.52735,0.58425,-0.70350,0.31291,-0.75821,0.03939,-0.71225,-0.15317,-0.58315,-0.39168,-0.37199,-0.52954,-0.16950,-0.60863,0.08425,-0.61488,0.25164,-0.48468,0.40591,-0.35339,1
1,0,0.92870,0.33164,0.76168,0.62349,0.49305,0.84266,0.21592,0.95193,-0.13956,0.96167,-0.47202,0.83590,-0.70747,0.65490,-0.87474,0.36750,-0.91814,0.05595,-0.89824,-0.26173,-0.73969,-0.54069,-0.50757,-0.74735,-0.22323,-0.86122,0.07810,-0.87159,0.36021,-0.78057,0.59407,-0.60270,1
1,0,0.83367,0.31456,0.65541,0.57671,0.34962,0.70677,0.17293,0.78947,-0.18976,0.79886,-0.41729,0.66541,-0.68421,0.47744,-0.74725,0.19492,-0.72180,-0.04887,-0.62030,-0.28195,-0.49165,-0.53463,-0.26577,-0.66014,-0.01530,-0.69706,0.22708,-0.64428,0.43100,-0.51206,0.64662,-0.30075,1
1,0,0.98455,-0.02736,0.98058,-0.04104,1,-0.07635,0.98720,0.01456,0.95278,-0.02604,0.98500,-0.07458,0.99382,-0.07149,0.97396,-0.09532,0.97264,-0.12224,0.99294,-0.05252,0.95278,-0.08914,0.97352,-0.08341,0.96653,-0.12912,0.93469,-0.14916,0.97132,-0.15755,0.96778,-0.18800,1
1,0,0.94052,-0.01531,0.94170,0.01001,0.94994,-0.01472,0.95878,-0.01060,0.94641,-0.03710,0.97173,-0.01767,0.97055,-0.03887,0.95465,-0.04064,0.95230,-0.04711,0.94229,-0.02179,0.92815,-0.04417,0.92049,-0.04476,0.92695,-0.05827,0.90342,-0.07479,0.91991,-0.07244,0.92049,-0.07420,1
1,0,0.97032,-0.14384,0.91324,-0.00228,0.96575,-0.17123,0.98630,0.18265,0.91781,0.00228,0.93607,-0.08447,0.91324,-0.00228,0.86758,-0.08676,0.97032,-0.21233,1,0.10274,0.92009,-0.05251,0.92466,0.06849,0.94043,-0.09252,0.97032,-0.20091,0.85388,-0.08676,0.96575,-0.21918,1
1,0,0.52542,-0.03390,0.94915,0.08475,0.52542,-0.16949,0.30508,-0.01695,0.50847,-0.13559,0.64407,0.28814,0.83051,-0.35593,0.54237,0.01695,0.55932,0.03390,0.59322,0.30508,0.86441,0.05085,0.40678,0.15254,0.67287,-0.00266,0.66102,-0.03390,0.83051,-0.15254,0.76271,-0.10169,1
1,0,0.33333,-0.25000,0.44444,0.22222,0.38889,0.16667,0.41667,0.13889,0.5,-0.11111,0.54911,-0.08443,0.58333,0.33333,0.55556,0.02778,0.25000,-0.19444,0.47222,-0.05556,0.52778,-0.02778,0.38889,0.08333,0.41543,-0.14256,0.19444,-0.13889,0.36924,-0.14809,0.08333,-0.5,1
1,0,0.51207,1,1,0.53810,0.71178,0.80833,0.45622,0.46427,0.33081,1,0.21249,1,-0.17416,1,-0.33081,0.98722,-0.61382,1,-0.52674,0.71699,-0.88500,0.47894,-1,0.35175,-1,0.09569,-1,-0.16713,-1,-0.42226,-0.91903,-0.65557,1
1,0,0.75564,0.49638,0.83550,0.54301,0.54916,0.72063,0.35225,0.70792,0.13469,0.94749,-0.09818,0.93778,-0.37604,0.82223,-0.52742,0.71161,-0.68358,0.67989,-0.70163,0.24956,-0.79147,0.02995,-0.98988,-0.29099,-0.70352,-0.32792,-0.63312,-0.19185,-0.34131,-0.60454,-0.19609,-0.62956,1
1,0,0.83789,0.42904,0.72113,0.58385,0.45625,0.78115,0.16470,0.82732,-0.13012,0.86947,-0.46177,0.78497,-0.59435,0.52070,-0.78470,0.26529,-0.84014,0.03928,-0.62041,-0.31351,-0.47412,-0.48905,-0.37298,-0.67796,-0.05054,-0.62691,0.14690,-0.45911,0.37093,-0.39167,0.48319,-0.24313,1
1,0,0.93658,0.35107,0.75254,0.65640,0.45571,0.88576,0.15323,0.95776,-0.21775,0.96301,-0.56535,0.83397,-0.78751,0.58045,-0.93104,0.26020,-0.93641,-0.06418,-0.87028,-0.40949,-0.65079,-0.67464,-0.36799,-0.84951,-0.04578,-0.91221,0.27330,-0.85762,0.54827,-0.69613,0.74828,-0.44173,1
1,0,0.92436,0.36924,0.71976,0.68420,0.29303,0.94078,-0.11108,0.76527,-0.31605,0.92453,-0.66616,0.78766,-0.92145,0.42314,-0.94315,0.09585,-1,0.03191,-0.66431,-0.66278,-0.46010,-0.78174,-0.13486,-0.88082,0.19765,-0.85137,0.48904,-0.70247,0.69886,-0.46048,0.76066,-0.13194,1
1,0,1,0.16195,1,-0.05558,1,0.01373,1,-0.12352,1,-0.01511,1,-0.01731,1,-0.06374,1,-0.07157,1,0.05900,1,-0.10108,1,-0.02685,1,-0.22978,1,-0.06823,1,0.08299,1,-0.14194,1,-0.07439,1
1,0,0.95559,-0.00155,0.86421,-0.13244,0.94982,-0.00461,0.82809,-0.51171,0.92441,0.10368,1,-0.14247,0.99264,-0.02542,0.95853,-0.15518,0.84013,0.61739,1,-0.16321,0.87492,-0.08495,0.85741,-0.01664,0.84132,-0.01769,0.82427,-0.01867,0.80634,-0.01957,0.78761,-0.02039,1
1,0,0.79378,0.29492,0.64064,0.52312,0.41319,0.68158,0.14177,0.83548,-0.16831,0.78772,-0.42911,0.72328,-0.57165,0.41471,-0.75436,0.16755,-0.69977,-0.09856,-0.57695,-0.23503,-0.40637,-0.38287,-0.17437,-0.52540,0.01523,-0.48707,0.19030,-0.38059,0.31008,-0.23199,0.34572,-0.08036,1
1,0,0.88085,0.35232,0.68389,0.65128,0.34816,0.79784,0.05832,0.90842,-0.29784,0.86490,-0.62635,0.69590,-0.77106,0.39309,-0.85803,0.08408,-0.81641,-0.24017,-0.64579,-0.50022,-0.39766,-0.68337,-0.11147,-0.75533,0.17041,-0.71504,0.40675,-0.57649,0.56626,-0.36765,0.62765,-0.13305,1
1,0,0.89589,0.39286,0.66129,0.71804,0.29521,0.90824,-0.04787,0.94415,-0.45725,0.84605,-0.77660,0.58511,-0.92819,0.25133,-0.92282,-0.15315,-0.76064,-0.48404,-0.50931,-0.76197,-0.14895,-0.88591,0.21581,-0.85703,0.53229,-0.68593,0.74846,-0.40656,0.83142,-0.07029,0.76862,0.27926,1
1,0,1,-0.24051,1,-0.20253,0.87342,-0.10127,0.88608,0.01266,1,0.11392,0.92405,0.06329,0.84810,-0.03797,0.63291,-0.36709,0.87342,-0.01266,0.93671,0.06329,1,0.25316,0.62025,-0.37975,0.84637,-0.05540,1,-0.06329,0.53165,0.02532,0.83544,-0.02532,1
1,0,0.74790,0.00840,0.83312,0.01659,0.82638,0.02469,0.86555,0.01681,0.60504,0.05882,0.79093,0.04731,0.77441,0.05407,0.64706,0.19328,0.84034,0.04202,0.71285,0.07122,0.68895,0.07577,0.66387,0.08403,0.63728,0.08296,0.61345,0.01681,0.58187,0.08757,0.55330,0.08891,1
1,0,0.85013,0.01809,0.92211,0.01456,0.92046,0.02180,0.92765,0.08010,0.87597,0.11370,0.91161,0.04320,0.90738,0.05018,0.87339,0.02842,0.95866,0,0.89097,0.07047,0.88430,0.07697,0.83721,0.10853,0.86923,0.08950,0.87597,0.08786,0.85198,0.10134,0.84258,0.10698,1
1,0,1,-0.01179,1,-0.00343,1,-0.01565,1,-0.01565,1,-0.02809,1,-0.02187,0.99828,-0.03087,0.99528,-0.03238,0.99314,-0.03452,1,-0.03881,1,-0.05039,1,-0.04931,0.99842,-0.05527,0.99400,-0.06304,0.99057,-0.06497,0.98971,-0.06668,1
1,0,0.89505,-0.03168,0.87525,0.05545,0.89505,0.01386,0.92871,0.02772,0.91287,-0.00990,0.94059,-0.01584,0.91881,0.03366,0.93663,0,0.94257,0.01386,0.90495,0.00792,0.88713,-0.01782,0.89307,0.02376,0.89002,0.01611,0.88119,0.00198,0.87327,0.04158,0.86733,0.02376,1
1,0,0.90071,0.01773,1,-0.01773,0.90071,0.00709,0.84752,0.05674,1,0.03546,0.97872,0.01064,0.97518,0.03546,1,-0.03191,0.89716,-0.03191,0.86170,0.07801,1,0.09220,0.90071,0.04610,0.94305,0.03247,0.94681,0.02482,1,0.01064,0.93617,0.02128,1
1,0,0.39394,-0.24242,0.62655,0.01270,0.45455,0.09091,0.63636,0.09091,0.21212,-0.21212,0.57576,0.15152,0.39394,0,0.56156,0.04561,0.51515,0.03030,0.78788,0.18182,0.30303,-0.15152,0.48526,0.05929,0.46362,0.06142,0.33333,-0.03030,0.41856,0.06410,0.39394,0.24242,1
1,0,0.86689,0.35950,0.72014,0.66667,0.37201,0.83049,0.08646,0.85893,-0.24118,0.86121,-0.51763,0.67577,-0.68714,0.41524,-0.77019,0.09898,-0.69397,-0.13652,-0.49488,-0.42207,-0.32537,-0.57679,-0.02844,-0.59954,0.15360,-0.53127,0.32309,-0.37088,0.46189,-0.19681,0.40956,0.01820,1
1,0,0.89563,0.37917,0.67311,0.69438,0.35916,0.88696,-0.04193,0.93345,-0.38875,0.84414,-0.67274,0.62078,-0.82680,0.30356,-0.86150,-0.05365,-0.73564,-0.34275,-0.51778,-0.62443,-0.23428,-0.73855,0.06911,-0.73856,0.33531,-0.62296,0.52414,-0.42086,0.61217,-0.17343,0.60073,0.08660,1
1,0,0.90547,0.41113,0.65354,0.74761,0.29921,0.95905,-0.13342,0.97820,-0.52236,0.83263,-0.79657,0.55086,-0.96631,0.15192,-0.93001,-0.25554,-0.71863,-0.59379,-0.41546,-0.85205,-0.02250,-0.93788,0.36318,-0.85368,0.67538,-0.61959,0.85977,-0.28123,0.88654,0.09800,0.75495,0.46301,1
1,0,1,1,0.36700,0.06158,0.12993,0.92713,-0.27586,0.93596,-0.31527,0.37685,-0.87192,0.36946,-0.92857,-0.08867,-0.38916,-0.34236,-0.46552,-0.82512,-0.05419,-0.93596,0.25616,-0.20443,0.73792,-0.45950,0.85471,-0.06831,1,1,0.38670,0.00246,0.17758,0.79790,1
1,0,1,0.51515,0.45455,0.33333,0.06061,0.36364,-0.32104,0.73062,-0.45455,0.48485,-0.57576,0,-0.57576,-0.12121,-0.33333,-0.48485,-0.09091,-0.84848,0.48485,-0.57576,0.57576,-0.42424,1,-0.39394,0.72961,0.12331,0.96970,0.57576,0.24242,0.36364,0.09091,0.33333,1
1,0,0.88110,0,0.94817,-0.02744,0.93598,-0.01220,0.90244,0.01829,0.90244,0.01829,0.93902,0.00915,0.95732,0.00305,1,0.02744,0.94207,-0.01220,0.90854,0.02439,0.91463,0.05488,0.99695,0.04878,0.89666,0.02226,0.90854,0.00915,1,0.05488,0.97561,-0.01220,1
1,0,0.82624,0.08156,0.79078,-0.08156,0.90426,-0.01773,0.92908,0.01064,0.80142,0.08865,0.94681,-0.00709,0.94326,0,0.93262,0.20213,0.95035,-0.00709,0.91489,0.00709,0.80496,0.07092,0.91135,0.15957,0.89527,0.08165,0.77660,0.06738,0.92553,0.18085,0.92553,0,1
1,0,0.74468,0.10638,0.88706,0.00982,0.88542,0.01471,0.87234,-0.01418,0.73050,0.10638,0.87657,0.02912,0.87235,0.03382,0.95745,0.07801,0.95035,0.04255,0.85597,0.04743,0.84931,0.05178,0.87234,0.11348,0.83429,0.06014,0.74468,-0.03546,0.81710,0.06800,0.80774,0.07173,1
1,0,0.87578,0.03727,0.89951,0.00343,0.89210,0.00510,0.86335,0,0.95031,0.07453,0.87021,0.00994,0.86303,0.01151,0.83851,-0.06211,0.85714,0.02484,0.84182,0.01603,0.83486,0.01749,0.79503,-0.04348,0.82111,0.02033,0.81988,0.08696,0.80757,0.02308,0.80088,0.02441,1
1,0,0.97513,0.00710,0.98579,0.01954,1,0.01954,0.99290,0.01599,0.95737,0.02309,0.97158,0.03552,1,0.03730,0.97869,0.02131,0.98579,0.05684,0.97158,0.04796,0.94494,0.05506,0.98401,0.03552,0.97540,0.06477,0.94849,0.08171,0.99112,0.06217,0.98934,0.09947,1
1,0,1,0.01105,1,0.01105,1,0.02320,0.99448,-0.01436,0.99448,-0.00221,0.98343,0.02320,1,0.00884,0.97569,0.00773,0.97901,0.01657,0.98011,0.00663,0.98122,0.02099,0.97127,-0.00663,0.98033,0.01600,0.97901,0.01547,0.98564,0.02099,0.98674,0.02762,1
1,0,1,-0.01342,1,0.01566,1,-0.00224,1,0.06264,0.97763,0.04474,0.95973,0.02908,1,0.06488,0.98881,0.03356,1,0.03579,0.99776,0.09396,0.95749,0.07383,1,0.10067,0.99989,0.08763,0.99105,0.08501,1,0.10067,1,0.10067,1
1,0,0.88420,0.36724,0.67123,0.67382,0.39613,0.86399,0.02424,0.93182,-0.35148,0.83713,-0.60316,0.58842,-0.78658,0.38778,-0.83285,-0.00642,-0.69318,-0.32963,-0.52504,-0.53924,-0.27377,-0.68126,0.00806,-0.69774,0.26028,-0.60678,0.44569,-0.43383,0.54209,-0.21542,0.56286,0.02823,1
1,0,0.90147,0.41786,0.64131,0.75725,0.30440,0.95148,-0.20449,0.96534,-0.55483,0.81191,-0.81857,0.50949,-0.96986,0.10345,-0.91456,-0.31412,-0.70163,-0.65461,-0.32354,-0.88999,0.05865,-0.94172,0.44483,-0.82154,0.74105,-0.55231,0.89415,-0.18725,0.87893,0.20359,0.70555,0.54852,1
1,0,0.32789,0.11042,0.15970,0.29308,0.14020,0.74485,-0.25131,0.91993,-0.16503,0.26664,-0.63714,0.24865,-0.97650,-0.00337,-0.23227,-0.19909,-0.30522,-0.48886,-0.14426,-0.89991,0.09345,-0.28916,0.28307,-0.18560,0.39599,-0.11498,0.31005,0.05614,0.21443,0.20540,0.13376,0.26422,1
1,0,0.65845,0.43617,0.44681,0.74804,0.05319,0.85106,-0.32027,0.82139,-0.68253,0.52408,-0.84211,0.07111,-0.82811,-0.28723,-0.47032,-0.71725,-0.04759,-0.86002,0.23292,-0.76316,0.56663,-0.52128,0.74300,-0.18645,0.74758,0.23713,0.45185,0.59071,0.20549,0.76764,-0.18533,0.74356,1
1,0,0.19466,0.05725,0.04198,0.25191,-0.10557,0.48866,-0.18321,-0.18321,-0.41985,0.06107,-0.45420,0.09160,-0.16412,-0.30534,-0.10305,-0.39695,0.18702,-0.17557,0.34012,-0.11953,0.28626,-0.16031,0.21645,0.24692,0.03913,0.31092,-0.03817,0.26336,-0.16794,0.16794,-0.30153,-0.33588,1
1,0,0.98002,0.00075,1,0,0.98982,-0.00075,0.94721,0.02394,0.97700,0.02130,0.97888,0.03073,0.99170,0.02338,0.93929,0.05713,0.93552,0.05279,0.97738,0.05524,1,0.06241,0.94155,0.08107,0.96709,0.07255,0.95701,0.08088,0.98190,0.08126,0.97247,0.08616,1
1,0,0.82254,-0.07572,0.80462,0.00231,0.87514,-0.01214,0.86821,-0.07514,0.72832,-0.11734,0.84624,0.05029,0.83121,-0.07399,0.74798,0.06705,0.78324,0.06358,0.86763,-0.02370,0.78844,-0.06012,0.74451,-0.02370,0.76717,-0.02731,0.74046,-0.07630,0.70058,-0.04220,0.78439,0.01214,1
1,0,0.35346,-0.13768,0.69387,-0.02423,0.68195,-0.03574,0.55717,-0.06119,0.61836,-0.10467,0.62099,-0.06527,0.59361,-0.07289,0.42271,-0.26409,0.58213,0.04992,0.49736,-0.08771,0.46241,-0.08989,0.45008,-0.00564,0.39146,-0.09038,0.35588,-0.10306,0.32232,-0.08637,0.28943,-0.08300,1
1,0,0.76046,0.01092,0.86335,0.00258,0.85821,0.00384,0.79988,0.02304,0.81504,0.12068,0.83096,0.00744,0.81815,0.00854,0.82777,-0.06974,0.76531,0.03881,0.76979,0.01148,0.75071,0.01232,0.77138,-0.00303,0.70886,0.01375,0.66161,0.00849,0.66298,0.01484,0.63887,0.01525,1
1,0,0.66667,-0.01366,0.97404,0.06831,0.49590,0.50137,0.75683,-0.00273,0.65164,-0.14071,0.40164,-0.48907,0.39208,0.58743,0.76776,0.31831,0.78552,0.11339,0.47541,-0.44945,1,0.00683,0.60656,0.06967,0.68656,0.17088,0.87568,0.07787,0.55328,0.24590,0.13934,0.48087,1
1,0,0.83508,0.08298,0.73739,-0.14706,0.84349,-0.05567,0.90441,-0.04622,0.89391,0.13130,0.81197,0.06723,0.79307,-0.08929,1,-0.02101,0.96639,0.06618,0.87605,0.01155,0.77521,0.06618,0.95378,-0.04202,0.83479,0.00123,1,0.12815,0.86660,-0.10714,0.90546,-0.04307,1
1,0,0.95113,0.00419,0.95183,-0.02723,0.93438,-0.01920,0.94590,0.01606,0.96510,0.03281,0.94171,0.07330,0.94625,-0.01326,0.97173,0.00140,0.94834,0.06038,0.92670,0.08412,0.93124,0.10087,0.94520,0.01361,0.93522,0.04925,0.93159,0.08168,0.94066,-0.00035,0.91483,0.04712,1
1,0,0.94701,-0.00034,0.93207,-0.03227,0.95177,-0.03431,0.95584,0.02446,0.94124,0.01766,0.92595,0.04688,0.93954,-0.01461,0.94837,0.02004,0.93784,0.01393,0.91406,0.07677,0.89470,0.06148,0.93988,0.03193,0.92489,0.02542,0.92120,0.02242,0.92459,0.00442,0.92697,-0.00577,1
1,0,0.90608,-0.01657,0.98122,-0.01989,0.95691,-0.03646,0.85746,0.00110,0.89724,-0.03315,0.89061,-0.01436,0.90608,-0.04530,0.91381,-0.00884,0.80773,-0.12928,0.88729,0.01215,0.92155,-0.02320,0.91050,-0.02099,0.89147,-0.07760,0.82983,-0.17238,0.96022,-0.03757,0.87403,-0.16243,1
1,0,0.84710,0.13533,0.73638,-0.06151,0.87873,0.08260,0.88928,-0.09139,0.78735,0.06678,0.80668,-0.00351,0.79262,-0.01054,0.85764,-0.04569,0.87170,-0.03515,0.81722,-0.09490,0.71002,0.04394,0.86467,-0.15114,0.81147,-0.04822,0.78207,-0.00703,0.75747,-0.06678,0.85764,-0.06151,1
surr_code/surr_code/data/make_batched.m0000640000175000017500000000150011361411756017113 0ustar iam23iam23function batched_data = make_batched(data, num_batches)
% function batched_data = make_batched(data, num_batches)
%
% Inputs:
% data DxN Note parity! Many people use NxD, I don't (here).
% num_batches 1x1
%
% Outputs:
% batched_data cel array of data split into batches, arrays of size Dx(N/num_batches +/- 1)
% Iain Murray, August 2007
% data = [...
% 101, 201, 301, 401, 501, 601;...
% 102, 202, 302, 402, 502, 602;...
% 103, 203, 303, 403, 503, 603;...
% 104, 204, 304, 404, 504, 604;...
% 105, 205, 305, 405, 505, 605];
[dim, num_cases] = size(data);
batch_size = floor(num_cases/num_batches);
batch_sizes = repmat(batch_size, num_batches, 1);
remainder = num_cases - batch_size*num_batches;
batch_sizes(1:remainder) = batch_sizes(1:remainder) + 1;
batched_data = mat2cell(data, dim, batch_sizes);
surr_code/surr_code/data/redwood.dat0000644000175000017500000000746211415646336016530 0ustar iam23iam230.9314815 0.9388889 0.9351852 0.9796296 0.787037 0.8425926 0.9388889 0.7351852 0.7314815 0.8314815 0.8333333 0.9314815 0.9388889 0.5981481 0.7833333 0.9074074 0.7259259 0.5944444 0.6611111 0.874074 0.8888889 0.6611111 0.8148148 0.587037 0.9796296 0.9666667 0.7259259 0.7888889 0.6537037 0.5981481 0.9425926 0.7259259 0.7851852 0.5462963 0.7092593 0.9037037 0.7796296 0.7777778 0.462963 0.4703704 0.7481481 0.8888889 0.5796296 0.9518519 0.6574074 0.6740741 0.7444444 0.7296296 0.7203704 0.6796296 0.5444444 0.9111111 0.8444444 0.4296296 0.4351852 0.5592593 0.9092593 0.9666667 0.4851852 0.8277778 0.7111111 0.8888889 0.6240741 0.887037 0.7611111 0.8981481 0.4111111 0.9722222 0.3574074 0.9537037 0.4592593 0.8277778 0.8314815 0.8481481 0.7111111 0.8666667 0.7111111 0.7185185 0.03888889 0.7166667 0.8 0.612963 0.7111111 0.7648148 0.7407407 0.7925926 0.1629630 0.7074074 0.7555556 0.2370370 0.7685185 0.6740741 0.6555556 0.7111111 0.1407407 0.6944444 0.6814815 0.1388889 0.6574074 0.6777778 0.1555556 0.3907407 0.4740741 0.3981481 0.2185185 0.1222222 0.2055556 0.2648148 0.2962963 0.3037037 0.3129630 0.25 0.637037 0.2722222 0.7314815 0.6259259 0.3111111 0.06296296 0.5944444 0.6185185 0.6018519 0.2222222 0.6277778 0.2351852 0.5981481 0.2814815 0.2185185 0.2481481 0.2314815 0.4759259 0.5388889 0.1981481 0.2777778 0.5314815 0.2055556 0.5203704 0.1851852 0.1777778 0.4037037 0.4759259 0.4074074 0.2333333 0.1870370 0.4111111 0.4314815 0.2203704 0.437037 0.2425926 0.3833333 0.2370370 0.3703704 0.4296296 0.3833333 0.4074074 0.4277778 0.3314815 0.3425926 0.3351852 0.3166667 0.3259259 0.2925926 0.3037037 0.3166667 0.08518519 0.07962963 0.0925926 0.1148148 0.1166667 0.09444444 0.1 0.08888889 0.08703704 0.2481481 0.05925926 0.04444444 0.07592593 0.2814815 0.2444444 0.05555556 0.2685185 0.2518519 0.2314815 0.1259259 0.2370370 0.2148148 0.2314815 0.1018519 0.2074074 0.2296296 0.1018519 0.1611111 0.1740741 0.1259259 0.1388889 0.1907407
0.8176796 0.7642726 0.7219153 0.664825 0.6611418 0.6445672 0.6224678 0.611418 0.5966851 0.5561694 0.5432781 0.5745856 0.5248619 0.4990792 0.4880295 0.4585635 0.4493554 0.4475138 0.4456722 0.4419890 0.4401473 0.4327808 0.3977901 0.3922652 0.3922652 0.3867403 0.373849 0.373849 0.3425414 0.3370166 0.3314917 0.3001842 0.3001842 0.2872928 0.2836096 0.2762431 0.2633517 0.2449355 0.2449355 0.2338858 0.2246777 0.2191529 0.2173112 0.2044199 0.2007366 0.1878453 0.1878453 0.1823204 0.1731123 0.1694291 0.1657459 0.1510129 0.1362799 0.1160221 0.1049724 0.1012891 0.09944751 0.09576427 0.09023941 0.0810313 0.07918969 0.07918969 0.07366483 0.06629834 0.06445672 0.05893186 0.053407 0.05156538 0.04972376 0.04051565 0.02946593 0.02762431 0.9889503 0.9871087 0.9834254 0.9797422 0.970534 0.9558011 0.946593 0.9429098 0.9300184 0.922652 0.9208103 0.9189687 0.9134438 0.9134438 0.9134438 0.907919 0.9042357 0.9023941 0.8987109 0.8913444 0.8876611 0.8876611 0.8876611 0.8858195 0.878453 0.8747698 0.8729282 0.8674033 0.8508287 0.8471455 0.8268877 0.7918969 0.7790055 0.7753223 0.771639 0.7697974 0.7605893 0.7532228 0.7440147 0.7440147 0.7348066 0.7348066 0.7311234 0.7219153 0.7163904 0.7127072 0.7127072 0.709024 0.7071823 0.7016575 0.6979742 0.6961326 0.6961326 0.6924494 0.6924494 0.6906077 0.6850829 0.679558 0.6519337 0.626151 0.6169429 0.6040516 0.6003683 0.5966851 0.5893186 0.5690608 0.5561694 0.5138122 0.4567219 0.4493554 0.4475138 0.4475138 0.4475138 0.4419890 0.4364641 0.4327808 0.427256 0.4217311 0.4198895 0.4143646 0.4106814 0.3996317 0.3959484 0.373849 0.3664825 0.3572744 0.3480663 0.3314917 0.3296501 0.3241252 0.320442 0.305709 0.2946593 0.2909761 0.2707182 0.2559853 0.2394107 0.2265193 0.2191529 0.2099448 0.2044199 0.2025783 0.2007366 0.2007366 0.1970534 0.1915285 0.1878453 0.1804788 0.1767956 0.160221 0.1362799 0.1215470 0.1215470 0.1104972 0.09944751 0.09576427 0.09208103 0.08839779 0.07550645 0.07550645 0.07366483 0.06629834 0.06629834
surr_code/surr_code/data/get_standardize_fns.m0000640000175000017500000000102011361656275020545 0ustar iam23iam23function [std_fn, destd_fn] = get_standardize_fns(xx)
%GET_STANDARDIZE_FNS get functions useful for simple scaling of data
%
% [std_fn, destd_fn] = get_standardize_fns(xx)
%
% Inputs:
% xx DxN
%
% Outputs:
% std_fn @fn zz = std_fn(xx) has zero mean and unit variance
% destd_fn @fn destd_fn(zz) == xx.
% Iain Murray, April 2010
x_sd = std(xx, [], 2);
x_mu = mean(xx, 2);
std_fn = @(z) bsxfun(@rdivide, bsxfun(@minus, z, x_mu), x_sd);
destd_fn = @(z) bsxfun(@plus, bsxfun(@times, z, x_sd), x_mu);
surr_code/surr_code/data/norminvcdf.m0000640000175000017500000000047211361661031016671 0ustar iam23iam23function xx = norminvcdf(uu, sigma, mu)
%NORMINVCDF Inverse Gaussian CDF without using the stats toolbox
%
% xx = norminvcdf(uu, sigma, mu)
% Iain Murray, May 2009, April 2010
if ~exist('sigma', 'var')
sigma = 1;
end
if ~exist('mu', 'var')
mu = 0;
end
xx = sqrt(2) * sigma .* erfinv(2*uu - 1) + mu;
surr_code/surr_code/data/mining.dat0000644000175000017500000000123111415646336016332 0ustar iam23iam23157
123
2
124
12
4
10
216
80
12
33
66
232
826
40
12
29
190
97
65
186
23
92
197
431
16
154
95
25
19
78
202
36
110
276
16
88
225
53
17
538
187
34
101
41
139
42
1
250
80
3
324
56
31
96
70
41
93
24
91
143
16
27
144
45
6
208
29
112
43
193
134
420
95
125
34
127
218
2
0
378
36
15
31
215
11
137
4
15
72
96
124
50
120
203
176
55
93
59
315
59
61
1
13
189
345
20
81
286
114
108
188
233
28
22
61
78
99
326
275
54
217
113
32
388
151
361
312
354
307
275
78
17
1205
644
467
871
48
123
456
498
49
131
182
255
194
224
566
462
228
806
517
1643
54
326
1312
348
745
217
120
275
20
66
292
4
368
307
336
19
329
330
312
536
145
75
364
37
19
156
47
129
1630
29
217
7
18
1358
2366
952
632
surr_code/surr_code/data/gp_play.m0000640000175000017500000000461511363413572016171 0ustar iam23iam23function gp_play()
% First stab. First classify into "fire" vs "no fire" and then regress on just
% the "fire" class. These tasks are probably related, but stuff that for now.
K = 10; % Cross-validation experiment with naive median predictor
num_runs = 30;
num_runs = 3;
[xx,yy] = read_forestfires();
N = size(xx, 2);
scores = zeros(num_runs, 1);
for run = 1:num_runs
fprintf('Run %d / %d\n', run, num_runs);
idx = randperm(N);
xx = xx(:, idx);
yy = yy(idx);
y_batches = make_batched(yy', K);
x_batches = make_batched(xx, K);
for bb = 1:K
fprintf('Fold %d / %d\n', bb, K);
train_x = cell2mat(x_batches([1:bb-1,bb+1:K]));
train_y = cell2mat(y_batches([1:bb-1,bb+1:K]))';
test_x = x_batches{bb};
test_y = y_batches{bb}';
pred = blah(train_x, train_y, test_x);
scores(run) = scores(run) + mean(abs(test_y-pred));
end
scores(run) = scores(run) / K;
end
fprintf('\n');
disp(errorbar_str(scores));
% >> naive_pred
% 12.8372 +/- 0.0046
%
% SVM got 12.71 +/- 0.01
%
% Without optimizing hypers I got: 12.8423 +/- 0.0072
%
% 20 steps for each hyper optimization: 12.819 +/- 0.026
%
% 3 runs with up to 100 steps: 12.814 +/- 0.015
%
% When run from gp_job with 100 steps: 12.8127 +/- 0.0041
function pred = blah(xx, yy, test_x)
use_dims = [9,10,11,12];
xx = xx(use_dims, :);
test_x = test_x(use_dims, :);
std_fn = get_standardize_fns(xx);
xx = std_fn(xx);
test_x = std_fn(test_x);
% Classification:
cc = (yy > min(yy))*2 - 1;
loghyper = [0.0; 0.0];
loghyper = minimize(loghyper, 'binaryEPGP', -100, 'covSEiso', xx', cc);
cpred = binaryEPGP(loghyper, 'covSEiso', xx', cc, test_x');
% Work out quantile of regression need to get median. If "no fire" has
% more than 0.5 probability then predict zero.
quantile = cpred - 0.5;
pred = zeros(size(test_x, 2), 1);
mask = (quantile > 0);
quantile = quantile(mask);
test_x = test_x(:, mask);
% Regression
idx = (yy > min(yy));
xx = xx(:, idx);
yy = log(yy(idx));
[std_fn, destd_fn] = get_standardize_fns(yy');
yy = std_fn(yy);
% GP regression:
covfunc = {'covSum', {'covSEiso','covNoise'}};
loghyper = [log(1.0); log(1.0); log(0.5)];
loghyper = minimize(loghyper, 'gpr', -100, covfunc, xx', yy);
[mu, S2] = gpr(loghyper, covfunc, xx', yy, test_x');
point_est = norminvcdf(quantile, sqrt(S2), mu);
% Get point_est that is required quantile through Gaussian prediction.
pred(mask) = exp(destd_fn(point_est));
surr_code/surr_code/data/naive_bootstrap2.m0000640000175000017500000000205211363414605020006 0ustar iam23iam23K = 10; % Cross-validation experiment with naive median predictor
num_runs = 30;
num_trials = 100;
[xx,yy] = read_forestfires();
N = size(xx, 2);
trial_scores = zeros(num_trials, 1);
for tt = 1:num_trials
fprintf('trial %d / %d\r', tt, num_trials);
scores = zeros(num_runs, 1);
for run = 1:num_runs
idx = randperm(N);
xx = xx(:, idx);
yy = yy(idx);
batches = make_batched(yy', K);
for bb = 1:K
yyb = cell2mat(batches([1:bb-1,bb+1:K]));
% Bootstrap resample:
Nb = length(yyb);
pred = median(yyb(ceil(rand(Nb, 1)*Nb)));
scores(run) = scores(run) + mean(abs(batches{bb}-pred));
end
scores(run) = scores(run) / K;
end
trial_scores(tt) = mean(scores);
end
fprintf('\n');
disp(errorbar_str(mean(trial_scores), std(trial_scores)));
disp(errorbar_str(trial_scores));
hist(trial_scores);
% >> naive_pred
% 12.8372 +/- 0.0046
%
% SVM got 12.71 +/- 0.01
%
% Cheating with naive_pred gets:
% >> mean(abs(yy-median(yy)))
% ans = 12.83
surr_code/surr_code/data/job0000750000175000017500000000042111361666010015041 0ustar iam23iam23#!/bin/sh
export MATLABPATH=/u/murray/mlut/matlab
cd /u/murray/mlut/09gp_hypers/gp_hypers/code/release/data
JOB=gp_job
if [ -e /pkgs/matlab-7.9/bin/matlab ] ; then
/pkgs/matlab-7.9/bin/matlab -nodisplay -r $JOB
else
/pkgs/matlab/bin/matlab -nodisplay -r $JOB
fi
surr_code/surr_code/data/gp_job_results.m0000640000175000017500000000012111363407446017546 0ustar iam23iam23load 'results/gp_play';
disp(errorbar_str([results.score]))
% 12.8127 +/- 0.0041
surr_code/surr_code/get_redwood_data.m0000644000175000017500000000065111415646336017124 0ustar iam23iam23function [X Y] = get_redwood_data(num_bins)
locs = load('data/redwood.dat')';
edges = linspace(0, 1, num_bins+1);
centers = (edges(1:end-1) + edges(2:end))/2;
[tmp xbins] = histc(locs(:,1), edges);
[tmp ybins] = histc(locs(:,2), edges);
[x1 x2] = meshgrid(centers);
X = [x1(:) x2(:)];
Y = zeros(num_bins, num_bins);
for n=1:length(xbins)
Y(xbins(n),ybins(n)) = Y(xbins(n),ybins(n)) +1;
end
Y = Y(:);
endsurr_code/surr_code/run_mine_simple.m0000644000175000017500000000511211415646336017013 0ustar iam23iam23function run_mine_simple()
addpath('gpml');
experiment_setup()
setup = setup_mine();
name = 'mine_simple';
fn = @(run) mine_run(setup, run);
success = experiment_run(name, setup.runs, fn, true);
function results = mine_run(setup, run)
UNPACK_STRUCT(setup);
counting_llh = add_call_counter(llh_fn, {});
counting_cov = add_call_counter(cov_fn, {});
tic;
[N D] = size(X);
theta = log(rand(D)*(max_ls-min_ls) + min_ls);
chol_cov = chol(counting_cov(theta));
ff = chol_cov' * randn([N 1]);
gain = 1;
gp_mean = log(mean(Y));
cur_llh = counting_llh(ff, gain, gp_mean);
ff_samples = zeros([iterations N]);
theta_samples = zeros([iterations D]);
gain_samples = zeros([iterations 1]);
mean_samples = zeros([iterations 1]);
cond_llh_samples = zeros([iterations 1]);
comp_llh_samples = zeros([iterations 1]);
num_llh_calls = zeros([iterations+burn 1]);
num_cov_calls = zeros([iterations+burn 1]);
for ii = (1-burn):iterations
if mod(ii, 10) == 0
fprintf('%03d/%03d] Iter %05d / %05d\n', run, runs, ii, iterations);
end
[theta chol_cov] = update_theta_simple(theta, ff, @(x) counting_llh(x, gain, gp_mean), ...
counting_cov, theta_log_prior, slice_width, chol_cov);
for jj = 1:ess_iterations
[ff cur_llh] = gppu_elliptical(ff, chol_cov, @(x) counting_llh(x, gain, gp_mean));
end
[gain cur_llh] = update_gain(gain, ff, gp_mean, cur_llh);
[gp_mean cur_llh] = update_mean(gp_mean, ff, gain, cur_llh);
num_llh_calls(ii+burn) = counting_llh({});
num_cov_calls(ii+burn) = counting_cov({});
if ii > 0
ff_samples(ii,:) = ff';
theta_samples(ii,:) = theta';
gain_samples(ii) = gain;
mean_samples(ii) = gp_mean;
cond_llh_samples(ii) = cur_llh;
comp_llh_samples(ii) = cur_llh - 0.5*ff'*solve_chol(chol_cov, ff) - sum(log(diag(chol_cov))) - 0.5*N*log(2*pi);
end
end
elapsed = toc;
results.ff_samples = ff_samples;
results.theta_samples = theta_samples;
results.gain_samples = gain_samples;
results.mean_samples = mean_samples;
results.cond_llh_samples = cond_llh_samples;
results.comp_llh_samples = comp_llh_samples;
results.num_llh_calls = num_llh_calls;
results.num_cov_calls = num_cov_calls;
results.elapsed = elapsed;
results.eff_cond_llh_samples = effective_size_rcoda(cond_llh_samples(:));
results.eff_comp_llh_samples = effective_size_rcoda(comp_llh_samples(:));
fprintf('%03d/%3d] CondLLH Eff Samp: %0.2f CompLLH Eff Samp: %0.2f %0.2f secs\n\n', ...
run, runs, results.eff_cond_llh_samples, results.eff_comp_llh_samples, elapsed);
surr_code/surr_code/run_mine_surr_taylor.m0000644000175000017500000000523011415646336020110 0ustar iam23iam23function run_mine_surr_taylor()
addpath('gpml');
experiment_setup()
setup = setup_mine();
name = 'mine_surr_taylor';
fn = @(run) mine_run(setup, run);
success = experiment_run(name, setup.runs, fn, true);
function results = mine_run(setup, run)
UNPACK_STRUCT(setup);
counting_llh = add_call_counter(llh_fn, {});
counting_cov = add_call_counter(cov_fn, {});
tic;
[N D] = size(X);
theta = log(rand(D)*(max_ls-min_ls) + min_ls);
chol_cov = chol(counting_cov(theta));
ff = chol_cov' * randn([N 1]);
gain = 1;
gp_mean = log(mean(Y));
cur_llh = counting_llh(ff, gain, gp_mean);
ff_samples = zeros([iterations N]);
theta_samples = zeros([iterations D]);
gain_samples = zeros([iterations 1]);
mean_samples = zeros([iterations 1]);
cond_llh_samples = zeros([iterations 1]);
comp_llh_samples = zeros([iterations 1]);
num_llh_calls = zeros([iterations+burn 1]);
num_cov_calls = zeros([iterations+burn 1]);
for ii = (1-burn):iterations
if mod(ii, 10) == 0
fprintf('%03d/%03d] Iter %05d / %05d\n', run, runs, ii, iterations);
end
[theta ff aux chol_cov] = update_theta_aux_surr(theta, ff, @(x) counting_llh(x, gain, gp_mean), ...
counting_cov, ...
@(theta,K) aux_taylor_fn(theta, K, gain, gp_mean), ...
theta_log_prior, slice_width);
for jj = 1:ess_iterations
[ff cur_llh] = gppu_elliptical(ff, chol_cov, @(x) counting_llh(x, gain, gp_mean));
end
[gain cur_llh] = update_gain(gain, ff, gp_mean, cur_llh);
[gp_mean cur_llh] = update_mean(gp_mean, ff, gain, cur_llh);
num_llh_calls(ii+burn) = counting_llh({});
num_cov_calls(ii+burn) = counting_cov({});
if ii > 0
ff_samples(ii,:) = ff';
theta_samples(ii,:) = theta';
gain_samples(ii) = gain;
mean_samples(ii) = gp_mean;
cond_llh_samples(ii) = cur_llh;
comp_llh_samples(ii) = cur_llh - 0.5*ff'*solve_chol(chol_cov, ff) - sum(log(diag(chol_cov))) - 0.5*N*log(2*pi);
end
end
elapsed = toc;
results.ff_samples = ff_samples;
results.theta_samples = theta_samples;
results.gain_samples = gain_samples;
results.mean_samples = mean_samples;
results.cond_llh_samples = cond_llh_samples;
results.comp_llh_samples = comp_llh_samples;
results.num_llh_calls = num_llh_calls;
results.num_cov_calls = num_cov_calls;
results.elapsed = elapsed;
results.eff_cond_llh_samples = effective_size_rcoda(cond_llh_samples(:));
results.eff_comp_llh_samples = effective_size_rcoda(comp_llh_samples(:));
fprintf('%03d/%3d] CondLLH Eff Samp: %0.2f CompLLH Eff Samp: %0.2f %0.2f secs\n\n', ...
run, runs, results.eff_cond_llh_samples, results.eff_comp_llh_samples, elapsed);
surr_code/surr_code/add_call_counter.m0000644000175000017500000000513211415646336017112 0ustar iam23iam23function fn2 = add_call_counter(fn, varargin)
%ADD_CALL_COUNTER wrap a function so that calls to it are counted
%
% fn2 = add_call_counter(fn, varargin);
%
% Now fn2 will behave exactly like fn, unless its arguments are exactly the same
% as varargin, in which case a call count will be returned and the counter will
% be reset. Set varargin to the simplest set of arguments that can never be
% passed to fn().
%
% Example:
% fn2 = add_call_counter(fn, {});
% ans1 = fn2(arg1, arg2);
% ans2 = fn2(arg1, arg2);
% num_calls = fn2({}); % num_calls == 2, counter is reset
% ans3 = fn2(arg1, arg2);
% num_calls = num_calls + fn2({}); % num_calls == 3, counter reset again.
%
% Inputs:
% fn @fn handle to function that needs wrapping
% varargin ? if fn2 is called with exactly this set of arguments
% (can be empty) then instead of calling fn, it returns the
% number of calls since the last reset & resets the counter
%
% Outputs:
% fn2 @fn function that behaves just like fn unless its input
% arguments are varargin. Then a call count is reported and
% the counter reset.
% Iain Murray, August 2009
% NOTE: this wasn't written with a huge number of counters in mind. The
% mechanism that allows multiple counters doesn't scale well as Matlab doesn't
% seem to use a hash lookup for its structure fields. Also, if fn2's counter
% isn't reset before fn2 goes out of scope, then memory will be leaked. Fancy
% Matlab handle class stuff could be used to avoid this, but the current version
% fits my needs and I wanted to write something that would work in Octave too.
% Set up unique identifier for this counter
persistent next_id
if isempty(next_id)
next_id = 0;
end
id = sprintf('a%d', next_id);
next_id = next_id + 1;
% Create wrapped function
flag_args = varargin;
fn2 = @(varargin) call_counter_helper(id, fn, flag_args, varargin);
function varargout = call_counter_helper(id, fn, flag_args, fn_args)
persistent counter
if isempty(counter)
counter = struct();
end
if isequal(fn_args, flag_args)
% Special set of arguments, return call count and flush counter
if isfield(counter, id)
varargout{1} = counter.(id);
counter = rmfield(counter, id);
else
varargout{1} = 0;
end
else
% Count the function call
if isfield(counter, id)
counter.(id) = counter.(id) + 1;
else
counter.(id) = 1;
end
% And behave like the original function
varargout = cell(1, max(1, nargout));
[varargout{:}] = fn(fn_args{:});
end
surr_code/surr_code/run_mine_chol.m0000644000175000017500000000510111415646336016445 0ustar iam23iam23function run_mine_chol()
addpath('gpml');
experiment_setup()
setup = setup_mine();
name = 'mine_chol';
fn = @(run) mine_run(setup, run);
success = experiment_run(name, setup.runs, fn, true);
function results = mine_run(setup, run)
UNPACK_STRUCT(setup);
counting_llh = add_call_counter(llh_fn, {});
counting_cov = add_call_counter(cov_fn, {});
tic;
[N D] = size(X);
theta = log(rand(D)*(max_ls-min_ls) + min_ls);
chol_cov = chol(counting_cov(theta));
ff = chol_cov' * randn([N 1]);
gain = 1;
gp_mean = log(mean(Y));
cur_llh = counting_llh(ff, gain, gp_mean);
ff_samples = zeros([iterations N]);
theta_samples = zeros([iterations D]);
gain_samples = zeros([iterations 1]);
mean_samples = zeros([iterations 1]);
cond_llh_samples = zeros([iterations 1]);
comp_llh_samples = zeros([iterations 1]);
num_llh_calls = zeros([iterations+burn 1]);
num_cov_calls = zeros([iterations+burn 1]);
for ii = (1-burn):iterations
if mod(ii, 10) == 0
fprintf('%03d/%03d] Iter %05d / %05d\n', run, runs, ii, iterations);
end
[theta ff chol_cov] = update_theta_aux_chol(theta, ff, @(x) counting_llh(x, gain, gp_mean), ...
counting_cov, theta_log_prior, slice_width);
for jj = 1:ess_iterations
[ff cur_llh] = gppu_elliptical(ff, chol_cov, @(x) counting_llh(x, gain, gp_mean));
end
[gain cur_llh] = update_gain(gain, ff, gp_mean, cur_llh);
[gp_mean cur_llh] = update_mean(gp_mean, ff, gain, cur_llh);
num_llh_calls(ii+burn) = counting_llh({});
num_cov_calls(ii+burn) = counting_cov({});
if ii > 0
ff_samples(ii,:) = ff';
theta_samples(ii,:) = theta';
gain_samples(ii) = gain;
mean_samples(ii) = gp_mean;
cond_llh_samples(ii) = cur_llh;
comp_llh_samples(ii) = cur_llh - 0.5*ff'*solve_chol(chol_cov, ff) - sum(log(diag(chol_cov))) - 0.5*N*log(2*pi);
end
end
elapsed = toc;
results.ff_samples = ff_samples;
results.theta_samples = theta_samples;
results.gain_samples = gain_samples;
results.mean_samples = mean_samples;
results.cond_llh_samples = cond_llh_samples;
results.comp_llh_samples = comp_llh_samples;
results.num_llh_calls = num_llh_calls;
results.num_cov_calls = num_cov_calls;
results.elapsed = elapsed;
results.eff_cond_llh_samples = effective_size_rcoda(cond_llh_samples(:));
results.eff_comp_llh_samples = effective_size_rcoda(comp_llh_samples(:));
fprintf('%03d/%3d] CondLLH Eff Samp: %0.2f CompLLH Eff Samp: %0.2f %0.2f secs\n\n', ...
run, runs, results.eff_cond_llh_samples, results.eff_comp_llh_samples, elapsed);
surr_code/surr_code/update_theta_aux_fixed.m0000644000175000017500000001146411415646336020340 0ustar iam23iam23function [theta, ff, aux, cholK] = update_theta_aux_fixed(theta, ff, Lfn, Kfn, aux, theta_Lprior, slice_width)
%UPDATE_THETA_AUX_FIXED MCMC update to GP hyper-param based on deterministic reparameterization
%
% [theta, ff] = update_theta_aux_noise(theta, ff, Lfn, Kfn, aux, theta_Lprior);
%
% Inputs:
% theta Kx1 hyper-parameters (can be an array of any size)
% ff Nx1 apriori Gaussian values
% Lfn @fn Log-likelihood function, Lfn(ff) returns a scalar
% Kfn @fn K = Kfn(theta) returns NxN covariance matrix
% NB: this should contain jitter (if necessary) to
% ensure the result is positive definite.
%
% Specify aux in one of three ways:
% ------------------------------------------
% [aux_std, gg] Nx2 column1: std-dev of auxiliary noise (can also be a 1x1).
% column2: point estimates of latent values
% OR function:
% aux_fn @fn [aux_std, gg] = aux_fn(theta, K);
% OR caching function:
% aux cel A pair: {aux_fn, aux_cache}, used like this:
% [aux_std, gg, aux_cache] = aux_fn(theta, K, aux_cache);
% The cache could be used (for example) to notice that
% relevant parts of theta or K haven't changed, and
% to immediately return the previously computed values.
% ---------------------------------
%
% theta_Lprior @fn Log-prior, theta_Lprior(theta) returns a scalar
%
% Outputs:
% theta Kx1 updated hyper-parameters (Kx1 or same size as inputted)
% ff Nx1 updated apriori Gaussian values
% aux - Last [aux_std, gg] computed, or {aux_std_fn, aux_cache},
% depending on what was passed in.
% cholK NxN chol(Kfn(theta))
%
% The model is ff ~ N(0, K), log(p(observations|ff)) = Lfn(ff)
% Reparameterize by "whitening" under the pseudo-posterior with
% pseudo-observations gg with Gaussian noise with width(s) aux_std.
% Iain Murray, November 2009, January 2010, April 2010
% If there is a good reason for it, there's no real reason full-covariance
% auxiliary noise couldn't be used. It would just be more expensive as sampling
% would require decomposing the noise covariance matrix. For now this code
% hasn't implemented that option.
N = numel(ff);
% Start constructing the struct that will be passed around while slicing
pp = struct('pos', theta, 'Kfn', Kfn);
if isnumeric(aux)
% Fixed auxiliary noise level
pp.adapt_aux = 0;
pp.aux_std = aux(:,1);
pp.aux_var = aux(:,1).*aux(:,1);
pp.gg = aux(:,2);
pp.Sinv_g = pp.gg ./ pp.aux_var;
elseif iscell(aux)
% Adapting noise level, with computations cached
pp.adapt_aux = 2;
pp.aux_fn = aux{1};
pp.aux_cache = aux{2};
else
% Simple function to choose noise level
pp.adapt_aux = 1;
pp.aux_fn = aux;
end
pp = theta_changed(pp);
% Instantiate nu|f,gg
pp.nu = pp.U_invR*ff(:) - pp.U_invR'\pp.Sinv_g;
% Compute current log-prob (up to constant) needed by slice sampling:
theta_unchanged = true; % theta hasn't moved yet, don't recompute covariances
pp = eval_particle(pp, -Inf, Lfn, theta_Lprior, theta_unchanged);
% Slice sample update of theta|g,nu
step_out = (slice_width > 0);
slice_width = abs(slice_width);
slice_fn = @(pp, Lpstar_min) eval_particle(pp, Lpstar_min, Lfn, theta_Lprior);
pp = slice_sweep(pp, slice_fn, slice_width, step_out);
theta = pp.pos;
ff = reshape(pp.ff, size(ff));
% Return some cached values
if iscell(aux)
aux = {pp.aux_fn, pp.aux_cache};
else
aux = [pp.aux_std.*ones(size(pp.gg)), pp.gg];
end
cholK = pp.U;
function pp = theta_changed(pp)
% Will call after changing hyperparameters to update covariances and
% their decompositions.
theta = pp.pos;
K = pp.Kfn(theta);
if pp.adapt_aux
if pp.adapt_aux == 1
[pp.aux_std, pp.gg] = pp.aux_fn(theta, K);
elseif pp.adapt_aux == 2
[pp.aux_std, pp.gg, pp.aux_cache] = pp.aux_fn(theta, K, pp.aux_cache);
end
pp.aux_var = pp.aux_std .* pp.aux_std;
pp.Sinv_g = pp.gg ./ pp.aux_var;
end
pp.U = chol(K);
pp.iK = inv(K);
pp.U_invR = chol(plus_diag(pp.iK, 1./pp.aux_var));
function pp = eval_particle(pp, Lpstar_min, Lfn, theta_Lprior, theta_unchanged)
% Prior on theta
theta = pp.pos;
Ltprior = theta_Lprior(theta);
if Ltprior == -Inf
pp.on_slice = false;
return;
end
if ~exist('theta_unchanged', 'var') || (~theta_unchanged)
pp = theta_changed(pp);
end
% Update f|gg,nu,theta
pp.ff = pp.U_invR\pp.nu + solve_chol(pp.U_invR, pp.Sinv_g);
% Compute joint probability and slice acceptability.
% I have dropped the constant: -0.5*length(pp.ff)*log(2*pi)
Lfprior = -0.5*(pp.ff'*solve_chol(pp.U, pp.ff)) - sum(log(diag(pp.U)));
LJacobian = -sum(log(diag(pp.U_invR)));
pp.Lpstar = Ltprior + Lfprior + Lfn(pp.ff) + LJacobian;
pp.on_slice = (pp.Lpstar >= Lpstar_min);
surr_code/surr_code/run_redwood_chol.m0000644000175000017500000000512611415646336017167 0ustar iam23iam23function run_redwood_chol()
addpath('gpml');
experiment_setup()
setup = setup_redwood();
name = 'redwood_chol';
fn = @(run) redwood_run(setup, run);
success = experiment_run(name, setup.runs, fn, true);
function results = redwood_run(setup, run)
UNPACK_STRUCT(setup);
counting_llh = add_call_counter(llh_fn, {});
counting_cov = add_call_counter(cov_fn, {});
tic;
[N D] = size(X);
theta = log(rand([D 1])*(max_ls-min_ls) + min_ls);
chol_cov = chol(counting_cov(theta));
ff = chol_cov' * randn([N 1]);
gain = 1;
gp_mean = log(mean(Y));
cur_llh = counting_llh(ff, gain, gp_mean);
ff_samples = zeros([iterations N]);
theta_samples = zeros([iterations D]);
gain_samples = zeros([iterations 1]);
mean_samples = zeros([iterations 1]);
cond_llh_samples = zeros([iterations 1]);
comp_llh_samples = zeros([iterations 1]);
num_llh_calls = zeros([iterations+burn 1]);
num_cov_calls = zeros([iterations+burn 1]);
for ii = (1-burn):iterations
if mod(ii, 10) == 0
fprintf('%03d/%03d] Iter %05d / %05d\n', run, runs, ii, iterations);
end
[theta ff chol_cov] = update_theta_aux_chol(theta, ff, @(x) counting_llh(x, gain, gp_mean), ...
counting_cov, theta_log_prior, slice_width);
for jj = 1:ess_iterations
[ff cur_llh] = gppu_elliptical(ff, chol_cov, @(x) counting_llh(x, gain, gp_mean));
end
[gain cur_llh] = update_gain(gain, ff, gp_mean, cur_llh);
[gp_mean cur_llh] = update_mean(gp_mean, ff, gain, cur_llh);
num_llh_calls(ii+burn) = counting_llh({});
num_cov_calls(ii+burn) = counting_cov({});
if ii > 0
ff_samples(ii,:) = ff';
theta_samples(ii,:) = theta';
gain_samples(ii) = gain;
mean_samples(ii) = gp_mean;
cond_llh_samples(ii) = cur_llh;
comp_llh_samples(ii) = cur_llh - 0.5*ff'*solve_chol(chol_cov, ff) - sum(log(diag(chol_cov))) - 0.5*N*log(2*pi);
end
end
elapsed = toc;
results.ff_samples = ff_samples;
results.theta_samples = theta_samples;
results.gain_samples = gain_samples;
results.mean_samples = mean_samples;
results.cond_llh_samples = cond_llh_samples;
results.comp_llh_samples = comp_llh_samples;
results.num_llh_calls = num_llh_calls;
results.num_cov_calls = num_cov_calls;
results.elapsed = elapsed;
results.eff_cond_llh_samples = effective_size_rcoda(cond_llh_samples(:));
results.eff_comp_llh_samples = effective_size_rcoda(comp_llh_samples(:));
fprintf('%03d/%3d] CondLLH Eff Samp: %0.2f CompLLH Eff Samp: %0.2f %0.2f secs\n\n', ...
run, runs, results.eff_cond_llh_samples, results.eff_comp_llh_samples, elapsed);
surr_code/surr_code/run_gaussian_simple.m0000644000175000017500000000420711415646336017701 0ustar iam23iam23function run_gaussian_simple()
addpath('gpml');
experiment_setup()
setup = setup_gaussian();
name = 'gaussian_simple';
fn = @(run) gaussian_run(setup, run);
success = experiment_run(name, setup.runs, fn, true);
function results = gaussian_run(setup, run)
UNPACK_STRUCT(setup);
counting_llh = add_call_counter(llh_fn, {});
counting_cov = add_call_counter(cov_fn, {});
tic;
[N D] = size(X);
theta = zeros([D 1]);
chol_cov = chol(counting_cov(theta));
ff = chol_cov' * randn([N 1]);
cur_llh = counting_llh(ff);
ff_samples = zeros([iterations N]);
theta_samples = zeros([iterations D]);
cond_llh_samples = zeros([iterations 1]);
comp_llh_samples = zeros([iterations 1]);
num_llh_calls = zeros([iterations+burn 1]);
num_cov_calls = zeros([iterations+burn 1]);
for ii = (1-burn):iterations
if mod(ii, 1) == 0
fprintf('%03d/%03d] Iter %05d / %05d\n', run, runs, ii, iterations);
end
[theta chol_cov] = update_theta_simple(theta, ff, @(x) counting_llh(x), ...
counting_cov, theta_log_prior, slice_width, chol_cov);
for jj = 1:ess_iterations
[ff cur_llh] = gppu_elliptical(ff, chol_cov, @(x) counting_llh(x));
end
num_llh_calls(ii+burn) = counting_llh({});
num_cov_calls(ii+burn) = counting_cov({});
if ii > 0
ff_samples(ii,:) = ff';
theta_samples(ii,:) = theta';
cond_llh_samples(ii) = cur_llh;
comp_llh_samples(ii) = cur_llh - 0.5*ff'*solve_chol(chol_cov, ff) ...
- sum(log(diag(chol_cov))) - 0.5*N*log(2*pi);
end
end
elapsed = toc;
results.ff_samples = ff_samples;
results.theta_samples = theta_samples;
results.cond_llh_samples = cond_llh_samples;
results.comp_llh_samples = comp_llh_samples;
results.num_llh_calls = num_llh_calls;
results.num_cov_calls = num_cov_calls;
results.elapsed = elapsed;
results.eff_cond_llh_samples = effective_size_rcoda(cond_llh_samples(:));
results.eff_comp_llh_samples = effective_size_rcoda(comp_llh_samples(:));
fprintf('%03d/%3d] CondLLH Eff Samp: %0.2f CompLLH Eff Samp: %0.2f %0.2f secs\n\n', ...
run, runs, results.eff_cond_llh_samples, results.eff_comp_llh_samples, elapsed);
surr_code/surr_code/run_redwood_fixed_noise.m0000644000175000017500000000526011415646336020535 0ustar iam23iam23function run_redwood_fixed_noise()
addpath('gpml');
experiment_setup()
setup = setup_redwood();
name = 'redwood_fixed_noise';
fn = @(run) redwood_run(setup, run);
success = experiment_run(name, setup.runs, fn, true);
function results = redwood_run(setup, run)
UNPACK_STRUCT(setup);
counting_llh = add_call_counter(llh_fn, {});
counting_cov = add_call_counter(cov_fn, {});
tic;
[N D] = size(X);
theta = log(rand([D 1])*(max_ls-min_ls) + min_ls);
chol_cov = chol(counting_cov(theta));
ff = chol_cov' * randn([N 1]);
gain = 1;
gp_mean = log(mean(Y));
cur_llh = counting_llh(ff, gain, gp_mean);
ff_samples = zeros([iterations N]);
theta_samples = zeros([iterations D]);
gain_samples = zeros([iterations 1]);
mean_samples = zeros([iterations 1]);
cond_llh_samples = zeros([iterations 1]);
comp_llh_samples = zeros([iterations 1]);
num_llh_calls = zeros([iterations+burn 1]);
num_cov_calls = zeros([iterations+burn 1]);
for ii = (1-burn):iterations
if mod(ii, 10) == 0
fprintf('%03d/%03d] Iter %05d / %05d\n', run, runs, ii, iterations);
end
[theta ff aux chol_cov] = update_theta_aux_fixed(theta, ff, ...
@(x) counting_llh(x, gain, gp_mean), ...
counting_cov, ...
@(theta, K) aux_noise_fn(theta, K, gain, gp_mean), ...
theta_log_prior, slice_width);
for jj = 1:ess_iterations
[ff cur_llh] = gppu_elliptical(ff, chol_cov, @(x) counting_llh(x, gain, gp_mean));
end
[gain cur_llh] = update_gain(gain, ff, gp_mean, cur_llh);
[gp_mean cur_llh] = update_mean(gp_mean, ff, gain, cur_llh);
num_llh_calls(ii+burn) = counting_llh({});
num_cov_calls(ii+burn) = counting_cov({});
if ii > 0
ff_samples(ii,:) = ff';
theta_samples(ii,:) = theta';
gain_samples(ii) = gain;
mean_samples(ii) = gp_mean;
cond_llh_samples(ii) = cur_llh;
comp_llh_samples(ii) = cur_llh - 0.5*ff'*solve_chol(chol_cov, ff) - sum(log(diag(chol_cov))) - 0.5*N*log(2*pi);
end
end
elapsed = toc;
results.ff_samples = ff_samples;
results.theta_samples = theta_samples;
results.gain_samples = gain_samples;
results.mean_samples = mean_samples;
results.cond_llh_samples = cond_llh_samples;
results.comp_llh_samples = comp_llh_samples;
results.num_llh_calls = num_llh_calls;
results.num_cov_calls = num_cov_calls;
results.elapsed = elapsed;
results.eff_cond_llh_samples = effective_size_rcoda(cond_llh_samples(:));
results.eff_comp_llh_samples = effective_size_rcoda(comp_llh_samples(:));
fprintf('%03d/%3d] CondLLH Eff Samp: %0.2f CompLLH Eff Samp: %0.2f %0.2f secs\n\n', ...
run, runs, results.eff_cond_llh_samples, results.eff_comp_llh_samples, elapsed);
surr_code/surr_code/gppu_elliptical.m0000644000175000017500000000476611415646336017021 0ustar iam23iam23function [xx, cur_log_like] = gppu_elliptical(xx, chol_Sigma, log_like_fn, cur_log_like, angle_range)
%GPPU_ELLIPTICAL Gaussian prior posterior update - slice sample on random ellipses
%
% [xx, cur_log_like] = gppu_elliptical(xx, chol_Sigma, log_like_fn[, cur_log_like])
%
% A Dx1 vector xx with prior N(0,Sigma) is updated leaving the posterior
% distribution invariant.
%
% Inputs:
% xx Dx1 initial vector (can be any array with D elements)
% chol_Sigma DxD chol(Sigma). Sigma is the prior covariance of xx
% log_like_fn @fn log_like_fn(xx) returns 1x1 log likelihood
% cur_log_like 1x1 Optional: log_like_fn(xx) of initial vector.
% You can omit this argument or pass [].
% angle_range 1x1 Default 0: explore whole ellipse with break point at
% first rejection. Set in (0,2*pi] to explore a bracket of
% the specified width centred uniformly at randomly.
%
% Outputs:
% xx Dx1 (size matches input) perturbed vector
% cur_log_like 1x1 log_like_fn(xx) of final vector
%
% See also: GPPU_UNDERRELAX, GPPU_LINESLICE, GPPU_SPLITSLICE
% Iain Murray, September 2009
D = numel(xx);
assert(isequal(size(chol_Sigma), [D D]));
if ~exist('angle_range', 'var')
angle_range = 0;
end
if ~exist('cur_log_like', 'var') || isempty(cur_log_like)
cur_log_like = log_like_fn(xx);
end
% Set up the ellipse and the slice threshold
nu = reshape(chol_Sigma'*randn(D, 1), size(xx));
hh = log(rand) + cur_log_like;
% Set up a bracket of angles and pick a first proposal.
% "phi = (theta'-theta)" is a change in angle.
if angle_range <= 0
% Bracket whole ellipse with both edges at first proposed point
phi = rand*2*pi;
phi_min = phi - 2*pi;
phi_max = phi;
else
% Randomly center bracket on current point
phi_min = -angle_range*rand;
phi_max = phi_min + angle_range;
phi = rand*(phi_max - phi_min) + phi_min;
end
% Slice sampling loop
while true
% Compute xx for proposed angle difference and check if it's on the slice
xx_prop = xx*cos(phi) + nu*sin(phi);
cur_log_like = log_like_fn(xx_prop);
if cur_log_like > hh
% New point is on slice, ** EXIT LOOP **
break;
end
% Shrink slice to rejected point
if phi > 0
phi_max = phi;
elseif phi < 0
phi_min = phi;
else
error('BUG DETECTED: Shrunk to current position and still not acceptable.');
end
% Propose new angle difference
phi = rand*(phi_max - phi_min) + phi_min;
end
xx = xx_prop;
surr_code/surr_code/run_mine_surr_noise.m0000644000175000017500000000522411415646336017716 0ustar iam23iam23function run_mine_surr_noise()
addpath('gpml');
experiment_setup()
setup = setup_mine();
name = 'mine_surr_noise';
fn = @(run) mine_run(setup, run);
success = experiment_run(name, setup.runs, fn, true);
function results = mine_run(setup, run)
UNPACK_STRUCT(setup);
counting_llh = add_call_counter(llh_fn, {});
counting_cov = add_call_counter(cov_fn, {});
tic;
[N D] = size(X);
theta = log(rand(D)*(max_ls-min_ls) + min_ls);
chol_cov = chol(counting_cov(theta));
ff = chol_cov' * randn([N 1]);
gain = 1;
gp_mean = log(mean(Y));
cur_llh = counting_llh(ff, gain, gp_mean);
ff_samples = zeros([iterations N]);
theta_samples = zeros([iterations D]);
gain_samples = zeros([iterations 1]);
mean_samples = zeros([iterations 1]);
cond_llh_samples = zeros([iterations 1]);
comp_llh_samples = zeros([iterations 1]);
num_llh_calls = zeros([iterations+burn 1]);
num_cov_calls = zeros([iterations+burn 1]);
for ii = (1-burn):iterations
if mod(ii, 1) == 0
fprintf('%03d/%03d] Iter %05d / %05d\n', run, runs, ii, iterations);
end
[theta ff aux chol_cov] = update_theta_aux_surr(theta, ff, @(x) counting_llh(x, gain, gp_mean), ...
counting_cov, ...
@(theta,K) aux_noise_fn(theta, K, gain, gp_mean), ...
theta_log_prior, slice_width);
for jj = 1:ess_iterations
[ff cur_llh] = gppu_elliptical(ff, chol_cov, @(x) counting_llh(x, gain, gp_mean));
end
[gain cur_llh] = update_gain(gain, ff, gp_mean, cur_llh);
[gp_mean cur_llh] = update_mean(gp_mean, ff, gain, cur_llh);
num_llh_calls(ii+burn) = counting_llh({});
num_cov_calls(ii+burn) = counting_cov({});
if ii > 0
ff_samples(ii,:) = ff';
theta_samples(ii,:) = theta';
gain_samples(ii) = gain;
mean_samples(ii) = gp_mean;
cond_llh_samples(ii) = cur_llh;
comp_llh_samples(ii) = cur_llh - 0.5*ff'*solve_chol(chol_cov, ff) - sum(log(diag(chol_cov))) - 0.5*N*log(2*pi);
end
end
elapsed = toc;
results.ff_samples = ff_samples;
results.theta_samples = theta_samples;
results.gain_samples = gain_samples;
results.mean_samples = mean_samples;
results.cond_llh_samples = cond_llh_samples;
results.comp_llh_samples = comp_llh_samples;
results.num_llh_calls = num_llh_calls;
results.num_cov_calls = num_cov_calls;
results.elapsed = elapsed;
results.eff_cond_llh_samples = effective_size_rcoda(cond_llh_samples(:));
results.eff_comp_llh_samples = effective_size_rcoda(comp_llh_samples(:));
fprintf('%03d/%3d] CondLLH Eff Samp: %0.2f CompLLH Eff Samp: %0.2f %0.2f secs\n\n', ...
run, runs, results.eff_cond_llh_samples, results.eff_comp_llh_samples, elapsed);
surr_code/surr_code/DEFAULT.m0000644000175000017500000000073111415646336014714 0ustar iam23iam23function DEFAULT(var_name, value);
%DEFAULT sets a variable to a default value if undefined or empty
%
% DEFAULT('num_iters', 42);
%
% num_iters will have its previous value, or if it wasn't defined or was empty,
% num_iters will now be equal to 42.
%
% Inputs:
% var_name string
% value whatever
% Iain Murray, September 2009
if evalin('caller', ['~exist(''' var_name ''', ''var'') || isempty(' var_name ')']);
assignin('caller', var_name, value)
end
surr_code/surr_code/run_ionosphere_fixed_noise.m0000644000175000017500000000512511415646336021245 0ustar iam23iam23function run_ionosphere_fixed_noise()
addpath('gpml');
experiment_setup()
setup = setup_ionosphere();
name = 'ionosphere_fixed_noise';
fn = @(run) ionosphere_run(setup, run);
success = experiment_run(name, setup.runs, fn, true);
function results = ionosphere_run(setup, run)
UNPACK_STRUCT(setup);
counting_llh = add_call_counter(llh_fn, {});
counting_cov = add_call_counter(cov_fn, {});
tic;
[N D] = size(train_x);
theta = zeros([D 1]);
chol_cov = chol(counting_cov(theta));
ff = chol_cov' * randn([N 1]);
gain = 1;
cur_llh = counting_llh(ff, gain);
ff_samples = zeros([iterations N]);
theta_samples = zeros([iterations D]);
gain_samples = zeros([iterations 1]);
cond_llh_samples = zeros([iterations 1]);
comp_llh_samples = zeros([iterations 1]);
num_llh_calls = zeros([iterations+burn 1]);
num_cov_calls = zeros([iterations+burn 1]);
for ii = (1-burn):iterations
if mod(ii, 1) == 0
if ii > 0
fprintf('%03d/%03d] Iter %05d / %05d Train Error: %0.2f \n', run, runs, ...
ii, iterations, train_error_fn(mean(ff_samples(1:ii,:),1)'));
else
fprintf('%03d/%03d] Iter %05d / %05d\n', run, runs, ii, iterations);
end
end
[theta ff aux chol_cov] = update_theta_aux_fixed(theta, ff, @(x) counting_llh(x, gain), ...
counting_cov, ...
@(theta, K) aux_noise_fn(theta, K, gain), ...
theta_log_prior, slice_width);
for jj = 1:ess_iterations
[ff cur_llh] = gppu_elliptical(ff, chol_cov, @(x) counting_llh(x, gain));
end
[gain cur_llh] = update_gain(gain, ff, cur_llh);
num_llh_calls(ii+burn) = counting_llh({});
num_cov_calls(ii+burn) = counting_cov({});
if ii > 0
ff_samples(ii,:) = ff';
theta_samples(ii,:) = theta';
gain_samples(ii) = gain;
cond_llh_samples(ii) = cur_llh;
comp_llh_samples(ii) = cur_llh - 0.5*ff'*solve_chol(chol_cov, ff) - sum(log(diag(chol_cov))) - 0.5*N*log(2*pi);
end
end
elapsed = toc;
results.ff_samples = ff_samples;
results.theta_samples = theta_samples;
results.gain_samples = gain_samples;
results.cond_llh_samples = cond_llh_samples;
results.comp_llh_samples = comp_llh_samples;
results.num_llh_calls = num_llh_calls;
results.num_cov_calls = num_cov_calls;
results.elapsed = elapsed;
results.eff_cond_llh_samples = effective_size_rcoda(cond_llh_samples(:));
results.eff_comp_llh_samples = effective_size_rcoda(comp_llh_samples(:));
fprintf('%03d/%3d] CondLLH Eff Samp: %0.2f CompLLH Eff Samp: %0.2f %0.2f secs\n\n', ...
run, runs, results.eff_cond_llh_samples, results.eff_comp_llh_samples, elapsed);
surr_code/surr_code/update_theta_aux_chol.m0000644000175000017500000000236011415646336020161 0ustar iam23iam23function [theta, ff, U] = update_theta_aux_chol(theta, ff, Lfn, Kfn, theta_Lprior, slice_width, U)
%UPDATE_THETA_AUX_CHOL MCMC update to GP hyperparam. Fixes nu used to draw f, rather than f itself
%
% Lfn @fn Log-likelihood function, Lfn(ff) returns a scalar
% Iain Murray, November 2009
Ufn = @(th) chol(Kfn(th));
DEFAULT('theta_Lprior', @(l) log(double((l>log(0.1)) && (l 0);
slice_width = abs(slice_width);
slice_fn = @(pp, Lpstar_min) eval_particle(pp, Lpstar_min, nu, Lfn, theta_Lprior, Ufn);
particle = slice_sweep(particle, slice_fn, slice_width, step_out);
theta = particle.pos;
ff = particle.ff;
U = particle.U;
function pp = eval_particle(pp, Lpstar_min, nu, Lfn, theta_Lprior, U)
% U is a precomputed chol(Kfn(pp.pos)) or a function that will compute it
% Prior
theta = pp.pos;
Ltprior = theta_Lprior(theta);
if Ltprior == -Inf
pp.on_slice = false;
return;
end
if ~isnumeric(U)
U = U(theta);
end
ff = (nu'*U)';
pp.Lpstar = Ltprior + Lfn(ff);
pp.on_slice = (pp.Lpstar >= Lpstar_min);
pp.U = U;
pp.ff = ff;
surr_code/surr_code/get_synthetic_data.m0000644000175000017500000000027411415646336017474 0ustar iam23iam23function [train_xx train_yy noise_var] = get_synthetic_data()
synth = load('data/synthetic.mat');
train_xx = synth.data.X;
train_yy = synth.data.Y;
noise_var = synth.noise_variance;
endsurr_code/surr_code/run_ionosphere_simple.m0000644000175000017500000000501311415646336020236 0ustar iam23iam23function run_ionosphere_simple()
addpath('gpml');
experiment_setup()
setup = setup_ionosphere();
name = 'ionosphere_simple';
fn = @(run) ionosphere_run(setup, run);
success = experiment_run(name, setup.runs, fn, true);
function results = ionosphere_run(setup, run)
UNPACK_STRUCT(setup);
counting_llh = add_call_counter(llh_fn, {});
counting_cov = add_call_counter(cov_fn, {});
tic;
[N D] = size(train_x);
theta = zeros([D 1]);
chol_cov = chol(counting_cov(theta));
ff = chol_cov' * randn([N 1]);
gain = 1;
cur_llh = counting_llh(ff, gain);
ff_samples = zeros([iterations N]);
theta_samples = zeros([iterations D]);
gain_samples = zeros([iterations 1]);
cond_llh_samples = zeros([iterations 1]);
comp_llh_samples = zeros([iterations 1]);
num_llh_calls = zeros([iterations+burn 1]);
num_cov_calls = zeros([iterations+burn 1]);
for ii = (1-burn):iterations
if mod(ii, 1) == 0
if ii > 0
fprintf('%03d/%03d] Iter %05d / %05d Train Error: %0.2f \n', run, runs, ...
ii, iterations, train_error_fn(mean(ff_samples(1:ii,:),1)'));
else
fprintf('%03d/%03d] Iter %05d / %05d\n', run, runs, ii, iterations);
end
end
[theta chol_cov] = update_theta_simple(theta, ff, @(x) counting_llh(x, gain), counting_cov, theta_log_prior, slice_width, chol_cov);
for jj = 1:ess_iterations
[ff cur_llh] = gppu_elliptical(ff, chol_cov, @(x) counting_llh(x, gain));
end
[gain cur_llh] = update_gain(gain, ff, cur_llh);
num_llh_calls(ii+burn) = counting_llh({});
num_cov_calls(ii+burn) = counting_cov({});
if ii > 0
ff_samples(ii,:) = ff';
theta_samples(ii,:) = theta';
gain_samples(ii) = gain;
cond_llh_samples(ii) = cur_llh;
comp_llh_samples(ii) = cur_llh - 0.5*ff'*solve_chol(chol_cov, ff) - sum(log(diag(chol_cov))) - 0.5*N*log(2*pi);
end
end
elapsed = toc;
results.ff_samples = ff_samples;
results.theta_samples = theta_samples;
results.gain_samples = gain_samples;
results.cond_llh_samples = cond_llh_samples;
results.comp_llh_samples = comp_llh_samples;
results.num_llh_calls = num_llh_calls;
results.num_cov_calls = num_cov_calls;
results.elapsed = elapsed;
results.eff_cond_llh_samples = effective_size_rcoda(cond_llh_samples(:));
results.eff_comp_llh_samples = effective_size_rcoda(comp_llh_samples(:));
fprintf('%03d/%3d] CondLLH Eff Samp: %0.2f CompLLH Eff Samp: %0.2f %0.2f secs\n\n', ...
run, runs, results.eff_cond_llh_samples, results.eff_comp_llh_samples, elapsed);
surr_code/surr_code/update_theta_aux_surr.m0000644000175000017500000001247011415646336020232 0ustar iam23iam23function [theta, ff, aux, cholK] = update_theta_aux_surr(theta, ff, Lfn, Kfn, aux, theta_Lprior, slice_width)
%UPDATE_THETA_AUX_SURR MCMC update to GP hyper-param based on aux. noisy vars
%
% [theta, ff] = update_theta_aux_noise(theta, ff, Lfn, Kfn, aux, theta_Lprior);
%
% Inputs:
% theta Kx1 hyper-parameters (can be an array of any size)
% ff Nx1 apriori Gaussian values
% Lfn @fn Log-likelihood function, Lfn(ff) returns a scalar
% Kfn @fn Kfn(theta) returns NxN covariance matrix
% NB: this should contain jitter (if necessary) to
% ensure the result is positive definite.
%
% Specify aux in one of three ways:
% ---------------------------------
% aux_std Nx1 std-dev of auxiliary noise to add to each value
% (can also be a 1x1).
% OR
% aux_std_fn @fn Function that returns auxiliary noise level(s) to use:
% aux_std = aux_std_fn(theta, K);
% OR
% aux cel A pair: {aux_std_fn, aux_cache} called like this:
% [aux_std, aux_cache] = aux_std_fn(theta, K, aux_cache);
% The cache could be used (for example) to notice that
% relevant parts of theta or K haven't changed, and
% immediately returning the old aux_std.
% ---------------------------------
%
% theta_Lprior @fn Log-prior, theta_Lprior(theta) returns a scalar
%
% Outputs:
% theta Kx1 updated hyper-parameters (Kx1 or same size as inputted)
% ff Nx1 updated apriori Gaussian values
% aux - Last aux_std computed, or {aux_std_fn, aux_cache},
% depending on what was passed in.
% cholK NxN chol(Kfn(theta))
%
% The model is draw g ~ N(0, K + S), (imagine as f ~ N(0, K) + noise with cov S)
% Draw f ~ N(m_p, C_p), using posterior mean and covariance given g.
% But implement that using nu ~ randn(N,1). Then clamp nu's while changing K.
%
% K is obtained from Kfn.
% S = diag(aux_std.^2), or for scalar aux_std (aux_std^2 * eye(N)).
% Iain Murray, November 2009, January 2010, May 2010
% If there is a good reason for it, there's no real reason full-covariance
% auxiliary noise couldn't be added. It would just be more expensive as sampling
% would require decomposing the noise covariance matrix. For now this code
% hasn't implemented that option.
N = numel(ff);
% Start constructing the struct that will be passed around while slicing
pp = struct('pos', theta, 'Kfn', Kfn);
if isnumeric(aux)
% Fixed auxiliary noise level
pp.adapt_aux = 0;
pp.aux_std = aux;
pp.aux_var = aux.*aux;
elseif iscell(aux)
% Adapting noise level, with computations cached
pp.adapt_aux = 2;
pp.aux_fn = aux{1};
pp.aux_cache = aux{2};
else
% Simple function to choose noise level
pp.adapt_aux = 1;
pp.aux_fn = aux;
end
pp.gg = zeros(N, 1);
pp = theta_changed(pp);
% Instantiate g|f
pp.gg = ff(:) + randn(N, 1).*pp.aux_std;
pp.Sinv_g = pp.gg ./ pp.aux_var;
% Instantiate nu|f,gg
pp.nu = pp.U_invR*ff(:) - pp.U_invR'\pp.Sinv_g;
% Compute current log-prob (up to constant) needed by slice sampling:
theta_unchanged = true; % theta hasn't moved yet, don't recompute covariances
pp = eval_particle(pp, -Inf, Lfn, theta_Lprior, theta_unchanged);
% Slice sample update of theta|g,nu
step_out = (slice_width > 0);
slice_width = abs(slice_width);
slice_fn = @(pp, Lpstar_min) eval_particle(pp, Lpstar_min, Lfn, theta_Lprior);
pp = slice_sweep(pp, slice_fn, slice_width, step_out);
theta = pp.pos;
ff = reshape(pp.ff, size(ff));
if iscell(aux)
aux = {pp.aux_fn, pp.aux_cache};
else
aux = pp.aux_std;
end
cholK = pp.U;
function pp = theta_changed(pp)
% Will call after changing hyperparameters to update covariances and
% their decompositions.
theta = pp.pos;
K = pp.Kfn(theta);
if pp.adapt_aux
if pp.adapt_aux == 1
pp.aux_std = pp.aux_fn(theta, K);
elseif pp.adapt_aux == 2
[pp.aux_std, pp.aux_cache] = pp.aux_fn(theta, K, pp.aux_cache);
end
pp.aux_var = pp.aux_std .* pp.aux_std;
pp.Sinv_g = pp.gg ./ pp.aux_var;
end
pp.U = chol(K);
pp.iK = inv(K);
pp.U_invR = chol(plus_diag(pp.iK, 1./pp.aux_var));
%pp.U_noise = chol(plus_diag(K, aux_var_vec));
function pp = eval_particle(pp, Lpstar_min, Lfn, theta_Lprior, theta_unchanged)
% Prior on theta
theta = pp.pos;
Ltprior = theta_Lprior(theta);
if Ltprior == -Inf
pp.on_slice = false;
return;
end
if ~exist('theta_unchanged', 'var') || (~theta_unchanged)
pp = theta_changed(pp);
end
% Update f|gg,nu,theta
pp.ff = pp.U_invR\pp.nu + solve_chol(pp.U_invR, pp.Sinv_g);
% Compute joint probability and slice acceptability.
% I have dropped the constant: -0.5*length(pp.gg)*log(2*pi)
%Lgprior = -0.5*(pp.gg'*solve_chol(pp.U_noise, pp.gg)) - sum(log(diag(pp.U_noise)));
%pp.Lpstar = Ltprior + Lgprior + Lfn(pp.ff);
%
% This version doesn't need U_noise, but commenting out the U_noise line and
% using this version doesn't actually seem to be faster?
Lfprior = -0.5*(pp.ff'*solve_chol(pp.U, pp.ff)) - sum(log(diag(pp.U)));
LJacobian = -sum(log(diag(pp.U_invR)));
%LJacobian = sum(log(diag(pp.U_R)));
Lg_f = -0.5*sum((pp.gg - pp.ff).^2)./pp.aux_var - sum(log(pp.aux_var.*ones(size(pp.ff))));
pp.Lpstar = Ltprior + Lg_f + Lfprior + Lfn(pp.ff) + LJacobian;
pp.on_slice = (pp.Lpstar >= Lpstar_min);
surr_code/surr_code/dsxy2figxy.m0000644000175000017500000000473111415646336015754 0ustar iam23iam23function varargout = dsxy2figxy(varargin)
% dsxy2figxy -- Transform point or position from data space
% coordinates into normalized figure coordinates
% Transforms [x y] or [x y width height] vectors from data space
% coordinates to normalized figure coordinates in order to locate
% annotation objects within a figure. These objects are: arrow,
% doublearrow, textarrow, ellipse line, rectangle, textbox
%
% Syntax:
% [figx figy] = dsxy2figxy([x1 y1],[x2 y2]) % GCA is used
% figpos = dsxy2figxy([x1 y1 width height])
% [figx figy] = dsxy2figxy(axes_handle, [x1 y1],[x2 y2])
% figpos = dsxy2figxy(axes_handle, [x1 y1 width height])
%
% Usage: Obtain a position on a plot in data space and
% apply this function to locate an annotation there, e.g.,
% [axx axy] = ginput(2); (input is in data space)
% [figx figy] = dsxy2figxy(gca, axx, axy); (now in figure space)
% har = annotation('textarrow',figx,figy);
% set(har,'String',['(' num2str(axx(2)) ',' num2str(axy(2)) ')'])
%
% Copyright 2006-2009 The MathWorks, Inc.
% Obtain arguments (limited argument checking is done)
% Determine if axes handle is specified
if length(varargin{1}) == 1 && ishandle(varargin{1}) ...
&& strcmp(get(varargin{1},'type'),'axes')
hAx = varargin{1};
varargin = varargin(2:end); % Remove arg 1 (axes handle)
else
hAx = gca;
end;
% Remaining args are either two point locations or a position vector
if length(varargin) == 1 % Assume a 4-element position vector
pos = varargin{1};
else
[x,y] = deal(varargin{:}); % Assume two pairs (start, end points)
end
% Get limits
axun = get(hAx,'Units');
set(hAx,'Units','normalized'); % Make axes units normalized
axpos = get(hAx,'Position'); % Get axes position
axlim = axis(hAx); % Get the axis limits [xlim ylim (zlim)]
axwidth = diff(axlim(1:2));
axheight = diff(axlim(3:4));
% Transform from data space coordinates to normalized figure coordinates
if exist('x','var') % Transform a and return pair of points
varargout{1} = (x - axlim(1)) * axpos(3) / axwidth + axpos(1);
varargout{2} = (y - axlim(3)) * axpos(4) / axheight + axpos(2);
else % Transform and return a position rectangle
pos(1) = (pos(1) - axlim(1)) / axwidth * axpos(3) + axpos(1);
pos(2) = (pos(2) - axlim(3)) / axheight * axpos(4) + axpos(2);
pos(3) = pos(3) * axpos(3) / axwidth;
pos(4) = pos(4) * axpos(4 )/ axheight;
varargout{1} = pos;
end
% Restore axes units
set(hAx,'Units',axun)surr_code/surr_code/setup_mine.m0000644000175000017500000000545311415646336016006 0ustar iam23iam23function setup = setup_mine()
setup.bin_width = 365;
[setup.X setup.Y] = get_mine_data(setup.bin_width);
setup.X = setup.X(:);
setup.Y = setup.Y(:);
setup.runs = 10;
setup.iterations = 20000;
setup.burn = 1000;
setup.ess_iterations = 10;
setup.max_ls = 100000.0; % Days
setup.min_ls = 5000.0;
setup.max_gain = 10.0;
setup.min_gain = 1e-5;
setup.min_gpmean = -10;
setup.max_gpmean = 10;
setup.max_aux_std = 200;
jitter = 1e-6;
gpml_covs = {'covSum', {'covSEard', 'covNoise'}};
setup.slice_width = 10;
setup.llh_fn = @mine_llh;
setup.cov_fn = @(theta) feval(gpml_covs{:}, [theta ; 0 ; log(jitter)], setup.X);
setup.theta_log_prior = @(theta) log(1.0*all((theta>log(setup.min_ls)) & (theta setup.max_aux_std) = setup.max_aux_std;
gg = (gg-gp_mean)/gain;
end
function [std gg] = aux_taylor(theta, K, gain, gp_mean)
[std gg] = poiss_aux_fixed(setup.Y);
std = std/gain;
std(std > setup.max_aux_std) = setup.max_aux_std;
gg = (gg-gp_mean)/gain;
end
function [gain llh] = update_gain(gain, ff, cur_mean, cur_llh)
% Slice sample
particle = struct('pos', gain, 'ff', ff, 'mean', cur_mean);
particle = gain_slice_fn(particle, -Inf);
particle = slice_sweep(particle, @gain_slice_fn, 1, 0);
gain = particle.pos;
llh = particle.Lpstar;
end
function [new_mean llh] = update_mean(cur_mean, ff, gain, cur_llh)
% Slice sample
particle = struct('pos', cur_mean, 'ff', ff, 'gain', gain);
particle = mean_slice_fn(particle, -Inf);
particle = slice_sweep(particle, @mean_slice_fn, 1, 0);
new_mean = particle.pos;
llh = particle.Lpstar;
end
function pp = gain_slice_fn(pp, Lpstar_min)
gain = pp.pos;
if (gain < setup.min_gain) || (gain > setup.max_gain)
pp.Lpstar = -Inf;
pp.on_slice = false;
return;
end
pp.Lpstar = mine_llh(pp.ff, gain, pp.mean);
pp.on_slice = (pp.Lpstar >= Lpstar_min);
end
function pp = mean_slice_fn(pp, Lpstar_min)
new_mean = pp.pos;
if (new_mean < setup.min_gpmean) || (new_mean > setup.max_gpmean)
pp.Lpstar = -Inf;
pp.on_slice = false;
return;
end
pp.Lpstar = mine_llh(pp.ff, pp.gain, new_mean);
pp.on_slice = (pp.Lpstar >= Lpstar_min);
end
endsurr_code/surr_code/run_redwood_surr_noise.m0000644000175000017500000000525711415646336020437 0ustar iam23iam23function run_redwood_surr_noise()
addpath('gpml');
experiment_setup()
setup = setup_redwood();
name = 'redwood_surr_noise';
fn = @(run) redwood_run(setup, run);
success = experiment_run(name, setup.runs, fn, true);
function results = redwood_run(setup, run)
UNPACK_STRUCT(setup);
counting_llh = add_call_counter(llh_fn, {});
counting_cov = add_call_counter(cov_fn, {});
tic;
[N D] = size(X);
theta = log(rand([D 1])*(max_ls-min_ls) + min_ls);
chol_cov = chol(counting_cov(theta));
ff = chol_cov' * randn([N 1]);
gain = 1;
gp_mean = log(mean(Y));
cur_llh = counting_llh(ff, gain, gp_mean);
ff_samples = zeros([iterations N]);
theta_samples = zeros([iterations D]);
gain_samples = zeros([iterations 1]);
mean_samples = zeros([iterations 1]);
cond_llh_samples = zeros([iterations 1]);
comp_llh_samples = zeros([iterations 1]);
num_llh_calls = zeros([iterations+burn 1]);
num_cov_calls = zeros([iterations+burn 1]);
for ii = (1-burn):iterations
if mod(ii, 10) == 0
fprintf('%03d/%03d] Iter %05d / %05d\n', run, runs, ii, iterations);
end
[theta ff aux chol_cov] = update_theta_aux_surr(theta, ff, ...
@(x) counting_llh(x, gain, gp_mean), ...
counting_cov, ...
@(theta, K) aux_noise_fn(theta, K, gain, gp_mean), ...
theta_log_prior, slice_width);
for jj = 1:ess_iterations
[ff cur_llh] = gppu_elliptical(ff, chol_cov, @(x) counting_llh(x, gain, gp_mean));
end
[gain cur_llh] = update_gain(gain, ff, gp_mean, cur_llh);
[gp_mean cur_llh] = update_mean(gp_mean, ff, gain, cur_llh);
num_llh_calls(ii+burn) = counting_llh({});
num_cov_calls(ii+burn) = counting_cov({});
if ii > 0
ff_samples(ii,:) = ff';
theta_samples(ii,:) = theta';
gain_samples(ii) = gain;
mean_samples(ii) = gp_mean;
cond_llh_samples(ii) = cur_llh;
comp_llh_samples(ii) = cur_llh - 0.5*ff'*solve_chol(chol_cov, ff) - sum(log(diag(chol_cov))) - 0.5*N*log(2*pi);
end
end
elapsed = toc;
results.ff_samples = ff_samples;
results.theta_samples = theta_samples;
results.gain_samples = gain_samples;
results.mean_samples = mean_samples;
results.cond_llh_samples = cond_llh_samples;
results.comp_llh_samples = comp_llh_samples;
results.num_llh_calls = num_llh_calls;
results.num_cov_calls = num_cov_calls;
results.elapsed = elapsed;
results.eff_cond_llh_samples = effective_size_rcoda(cond_llh_samples(:));
results.eff_comp_llh_samples = effective_size_rcoda(comp_llh_samples(:));
fprintf('%03d/%3d] CondLLH Eff Samp: %0.2f CompLLH Eff Samp: %0.2f %0.2f secs\n\n', ...
run, runs, results.eff_cond_llh_samples, results.eff_comp_llh_samples, elapsed);
surr_code/surr_code/run_redwood_simple.m0000644000175000017500000000513711415646336017535 0ustar iam23iam23function run_redwood_simple()
addpath('gpml');
experiment_setup()
setup = setup_redwood();
name = 'redwood_simple';
fn = @(run) redwood_run(setup, run);
success = experiment_run(name, setup.runs, fn, true);
function results = redwood_run(setup, run)
UNPACK_STRUCT(setup);
counting_llh = add_call_counter(llh_fn, {});
counting_cov = add_call_counter(cov_fn, {});
tic;
[N D] = size(X);
theta = log(rand([D 1])*(max_ls-min_ls) + min_ls);
chol_cov = chol(counting_cov(theta));
ff = chol_cov' * randn([N 1]);
gain = 1;
gp_mean = log(mean(Y));
cur_llh = counting_llh(ff, gain, gp_mean);
ff_samples = zeros([iterations N]);
theta_samples = zeros([iterations D]);
gain_samples = zeros([iterations 1]);
mean_samples = zeros([iterations 1]);
cond_llh_samples = zeros([iterations 1]);
comp_llh_samples = zeros([iterations 1]);
num_llh_calls = zeros([iterations+burn 1]);
num_cov_calls = zeros([iterations+burn 1]);
for ii = (1-burn):iterations
if mod(ii, 10) == 0
fprintf('%03d/%03d] Iter %05d / %05d\n', run, runs, ii, iterations);
end
[theta chol_cov] = update_theta_simple(theta, ff, @(x) counting_llh(x, gain, gp_mean), ...
counting_cov, theta_log_prior, slice_width, chol_cov);
for jj = 1:ess_iterations
[ff cur_llh] = gppu_elliptical(ff, chol_cov, @(x) counting_llh(x, gain, gp_mean));
end
[gain cur_llh] = update_gain(gain, ff, gp_mean, cur_llh);
[gp_mean cur_llh] = update_mean(gp_mean, ff, gain, cur_llh);
num_llh_calls(ii+burn) = counting_llh({});
num_cov_calls(ii+burn) = counting_cov({});
if ii > 0
ff_samples(ii,:) = ff';
theta_samples(ii,:) = theta';
gain_samples(ii) = gain;
mean_samples(ii) = gp_mean;
cond_llh_samples(ii) = cur_llh;
comp_llh_samples(ii) = cur_llh - 0.5*ff'*solve_chol(chol_cov, ff) - sum(log(diag(chol_cov))) - 0.5*N*log(2*pi);
end
end
elapsed = toc;
results.ff_samples = ff_samples;
results.theta_samples = theta_samples;
results.gain_samples = gain_samples;
results.mean_samples = mean_samples;
results.cond_llh_samples = cond_llh_samples;
results.comp_llh_samples = comp_llh_samples;
results.num_llh_calls = num_llh_calls;
results.num_cov_calls = num_cov_calls;
results.elapsed = elapsed;
results.eff_cond_llh_samples = effective_size_rcoda(cond_llh_samples(:));
results.eff_comp_llh_samples = effective_size_rcoda(comp_llh_samples(:));
fprintf('%03d/%3d] CondLLH Eff Samp: %0.2f CompLLH Eff Samp: %0.2f %0.2f secs\n\n', ...
run, runs, results.eff_cond_llh_samples, results.eff_comp_llh_samples, elapsed);
surr_code/surr_code/plus_diag.m0000644000175000017500000000130311415646336015573 0ustar iam23iam23function X = plus_diag(X, y)
%PLUS_DIAG add a scalar or vector onto the diagonal of a matrix
%
% X = plus_diag(X, y)
%
% For vector y: X = X + diag(y);
% For scalar y: X = X + diag(repmat(y, length(X), 1));
% (although more efficient code is used)
%
% Inputs:
% X NxN
% y Nx1, 1xN or 1x1
%
% Outputs:
% X NxN
%
% Note: in older version of Matlab and Octave this function never adds y to X
% in place. As long as plus_diag is called from a function (not a script or the
% command-line) Matlab >= R2007a will update X in place when it can.
%
% Iain Murray, June 2006, July 2008.
[N, M] = size(X);
if N~=M, error('X must be square'), end
diagidx = (0:N-1)*N + (1:N);
X(diagidx) = X(diagidx) + y(:)';
surr_code/surr_code/poiss_aux.m0000644000175000017500000000363411415646336015647 0ustar iam23iam23function [aux_std, gg] = poiss_aux(counts, log_prior_var, prior_mean)
%POISS_AUX return effective Gaussian likelihood noise level (and centres)
%
% [aux_std, gg] = poiss_aux(counts, log_prior_var, prior_mean)
%
% Inputs:
% counts Nx1
% log_prior_var 1x1 or Nx1
% prior_mean 1x1 or Nx1
%
% Outputs:
% aux_std Nx1
% gg Nx1 (if needed)
% Iain Murray, April 2010
prior_var = exp(log_prior_var);
prior_precision = 1./prior_var;
% % Straight-up Laplace approx:
% mu = counts.*prior_var + prior_mean - lambertw_approx(prior_var.*exp(prior_mean + counts.*prior_var));
% post_var = sqrt(prior_var ./ (1 + exp(mu).*prior_var));
% Nearly the same as Laplace and cheaper
idx = (counts > 0);
mu = zeros(size(counts));
mu(idx) = (msk(prior_var, idx).*counts(idx).*log(counts(idx)) + msk(prior_mean, idx)) ...
./ (1 + counts(idx).*msk(prior_var, idx));
% TODO could come up with a cheap proxy for zero counts as well. Currently do Laplace:
mu(~idx) = msk(prior_mean, ~idx) - lambertw_approx(msk(prior_var, ~idx).*exp(msk(prior_mean, ~idx)));
% Just need to be in the right ball-park for the variance
%mu(~idx) = log(0.5) + msk(prior_mean, ~idx); % Ok for small prior_mean, but not for big
post_var = prior_var ./ (1 + exp(mu).*prior_var);
post_precision = 1./post_var;
mask = (post_precision > prior_precision);
aux_std = zeros(size(mask));
aux_std(mask) = sqrt(1 ./ (post_precision(mask) - msk(prior_precision, mask)));
aux_std(~mask) = Inf;
if nargout > 1
gg = (aux_std.^2).*(mu.*post_precision - prior_mean.*prior_precision);
% Get rid of infinities, which disappear in sensible limits anyway:
BIG = 1e100;
gg = min(gg, BIG);
end
function xx = msk(A, mask)
%MSK msk(A, mask) returns A(mask), or just A if A is a scalar.
%
%This is useful for when A is a scalar standing in for an array with all
%elements equal.
if numel(A) == 1
xx = A;
else
xx = A(mask);
end
surr_code/surr_code/update_theta_simple.m0000644000175000017500000000214711415646336017653 0ustar iam23iam23function [theta, U] = update_theta_simple(theta, ff, Lfn, Kfn, theta_Lprior, slice_width, U)
%UPDATE_THETA_SIMPLE Standard slice-sampling MCMC update to GP hyper-param
% Iain Murray, January 2010
Ufn = @(th) chol(Kfn(th));
DEFAULT('theta_Lprior', @(l) log(double((l>log(0.1)) && (l 0);
slice_width = abs(slice_width);
slice_fn = @(pp, Lpstar_min) eval_particle(pp, Lpstar_min, Lfn, theta_Lprior, Ufn);
particle = slice_sweep(particle, slice_fn, slice_width, step_out);
theta = particle.pos;
U = particle.U;
function pp = eval_particle(pp, Lpstar_min, Lfn, theta_Lprior, U)
% Prior
theta = pp.pos;
Ltprior = theta_Lprior(theta);
if Ltprior == -Inf
pp.Lpstar = -Inf;
pp.on_slice = false;
return;
end
if ~isnumeric(U)
U = U(theta);
end
Lfprior = -0.5*(pp.ff'*solve_chol(U, pp.ff)) - sum(log(diag(U))); % + const
pp.Lpstar = Ltprior + Lfprior + Lfn(pp.ff);
pp.on_slice = (pp.Lpstar >= Lpstar_min);
pp.U = U;
surr_code/surr_code/gpml/0000755000175000017500000000000011415646336014410 5ustar iam23iam23surr_code/surr_code/gpml/minimize.m0000644000175000017500000002161011415646336016407 0ustar iam23iam23function [X, fX, i] = minimize(X, f, length, varargin)
% Minimize a differentiable multivariate function.
%
% Usage: [X, fX, i] = minimize(X, f, length, P1, P2, P3, ... )
%
% where the starting point is given by "X" (D by 1), and the function named in
% the string "f", must return a function value and a vector of partial
% derivatives of f wrt X, the "length" gives the length of the run: if it is
% positive, it gives the maximum number of line searches, if negative its
% absolute gives the maximum allowed number of function evaluations. You can
% (optionally) give "length" a second component, which will indicate the
% reduction in function value to be expected in the first line-search (defaults
% to 1.0). The parameters P1, P2, P3, ... are passed on to the function f.
%
% The function returns when either its length is up, or if no further progress
% can be made (ie, we are at a (local) minimum, or so close that due to
% numerical problems, we cannot get any closer). NOTE: If the function
% terminates within a few iterations, it could be an indication that the
% function values and derivatives are not consistent (ie, there may be a bug in
% the implementation of your "f" function). The function returns the found
% solution "X", a vector of function values "fX" indicating the progress made
% and "i" the number of iterations (line searches or function evaluations,
% depending on the sign of "length") used.
%
% The Polack-Ribiere flavour of conjugate gradients is used to compute search
% directions, and a line search using quadratic and cubic polynomial
% approximations and the Wolfe-Powell stopping criteria is used together with
% the slope ratio method for guessing initial step sizes. Additionally a bunch
% of checks are made to make sure that exploration is taking place and that
% extrapolation will not be unboundedly large.
%
% See also: checkgrad
%
% Copyright (C) 2001 - 2006 by Carl Edward Rasmussen (2006-09-08).
INT = 0.1; % don't reevaluate within 0.1 of the limit of the current bracket
EXT = 3.0; % extrapolate maximum 3 times the current step-size
MAX = 20; % max 20 function evaluations per line search
RATIO = 10; % maximum allowed slope ratio
SIG = 0.1; RHO = SIG/2; % SIG and RHO are the constants controlling the Wolfe-
% Powell conditions. SIG is the maximum allowed absolute ratio between
% previous and new slopes (derivatives in the search direction), thus setting
% SIG to low (positive) values forces higher precision in the line-searches.
% RHO is the minimum allowed fraction of the expected (from the slope at the
% initial point in the linesearch). Constants must satisfy 0 < RHO < SIG < 1.
% Tuning of SIG (depending on the nature of the function to be optimized) may
% speed up the minimization; it is probably not worth playing much with RHO.
% The code falls naturally into 3 parts, after the initial line search is
% started in the direction of steepest descent. 1) we first enter a while loop
% which uses point 1 (p1) and (p2) to compute an extrapolation (p3), until we
% have extrapolated far enough (Wolfe-Powell conditions). 2) if necessary, we
% enter the second loop which takes p2, p3 and p4 chooses the subinterval
% containing a (local) minimum, and interpolates it, unil an acceptable point
% is found (Wolfe-Powell conditions). Note, that points are always maintained
% in order p0 <= p1 <= p2 < p3 < p4. 3) compute a new search direction using
% conjugate gradients (Polack-Ribiere flavour), or revert to steepest if there
% was a problem in the previous line-search. Return the best value so far, if
% two consecutive line-searches fail, or whenever we run out of function
% evaluations or line-searches. During extrapolation, the "f" function may fail
% either with an error or returning Nan or Inf, and minimize should handle this
% gracefully.
if max(size(length)) == 2, red=length(2); length=length(1); else red=1; end
if length>0, S='Linesearch'; else S='Function evaluation'; end
i = 0; % zero the run length counter
ls_failed = 0; % no previous line search has failed
[f0 df0] = feval(f, X, varargin{:}); % get function value and gradient
fX = f0;
i = i + (length<0); % count epochs?!
s = -df0; d0 = -s'*s; % initial search direction (steepest) and slope
x3 = red/(1-d0); % initial step is red/(|s|+1)
while i < abs(length) % while not finished
i = i + (length>0); % count iterations?!
X0 = X; F0 = f0; dF0 = df0; % make a copy of current values
if length>0, M = MAX; else M = min(MAX, -length-i); end
while 1 % keep extrapolating as long as necessary
x2 = 0; f2 = f0; d2 = d0; f3 = f0; df3 = df0;
success = 0;
while ~success && M > 0
try
M = M - 1; i = i + (length<0); % count epochs?!
[f3 df3] = feval(f, X+x3*s, varargin{:});
if isnan(f3) || isinf(f3) || any(isnan(df3)+isinf(df3)), error(''), end
success = 1;
catch % catch any error which occured in f
x3 = (x2+x3)/2; % bisect and try again
end
end
if f3 < F0, X0 = X+x3*s; F0 = f3; dF0 = df3; end % keep best values
d3 = df3'*s; % new slope
if d3 > SIG*d0 || f3 > f0+x3*RHO*d0 || M == 0 % are we done extrapolating?
break
end
x1 = x2; f1 = f2; d1 = d2; % move point 2 to point 1
x2 = x3; f2 = f3; d2 = d3; % move point 3 to point 2
A = 6*(f1-f2)+3*(d2+d1)*(x2-x1); % make cubic extrapolation
B = 3*(f2-f1)-(2*d1+d2)*(x2-x1);
x3 = x1-d1*(x2-x1)^2/(B+sqrt(B*B-A*d1*(x2-x1))); % num. error possible, ok!
if ~isreal(x3) || isnan(x3) || isinf(x3) || x3 < 0 % num prob | wrong sign?
x3 = x2*EXT; % extrapolate maximum amount
elseif x3 > x2*EXT % new point beyond extrapolation limit?
x3 = x2*EXT; % extrapolate maximum amount
elseif x3 < x2+INT*(x2-x1) % new point too close to previous point?
x3 = x2+INT*(x2-x1);
end
end % end extrapolation
while (abs(d3) > -SIG*d0 || f3 > f0+x3*RHO*d0) && M > 0 % keep interpolating
if d3 > 0 || f3 > f0+x3*RHO*d0 % choose subinterval
x4 = x3; f4 = f3; d4 = d3; % move point 3 to point 4
else
x2 = x3; f2 = f3; d2 = d3; % move point 3 to point 2
end
if f4 > f0
x3 = x2-(0.5*d2*(x4-x2)^2)/(f4-f2-d2*(x4-x2)); % quadratic interpolation
else
A = 6*(f2-f4)/(x4-x2)+3*(d4+d2); % cubic interpolation
B = 3*(f4-f2)-(2*d2+d4)*(x4-x2);
x3 = x2+(sqrt(B*B-A*d2*(x4-x2)^2)-B)/A; % num. error possible, ok!
end
if isnan(x3) || isinf(x3)
x3 = (x2+x4)/2; % if we had a numerical problem then bisect
end
x3 = max(min(x3, x4-INT*(x4-x2)),x2+INT*(x4-x2)); % don't accept too close
[f3 df3] = feval(f, X+x3*s, varargin{:});
if f3 < F0, X0 = X+x3*s; F0 = f3; dF0 = df3; end % keep best values
M = M - 1; i = i + (length<0); % count epochs?!
d3 = df3'*s; % new slope
end % end interpolation
if abs(d3) < -SIG*d0 && f3 < f0+x3*RHO*d0 % if line search succeeded
X = X+x3*s; f0 = f3; fX = [fX' f0]'; % update
% variables
%if mod(i,1000) == 0
% fprintf('%s %6i; Value %4.6e\r', S, i, f0);
%end
s = (df3'*df3-df0'*df3)/(df0'*df0)*s - df3; % Polack-Ribiere CG direction
df0 = df3; % swap derivatives
d3 = d0; d0 = df0'*s;
if d0 > 0 % new slope must be negative
s = -df0; d0 = -s'*s; % otherwise use steepest direction
end
x3 = x3 * min(RATIO, d3/(d0-realmin)); % slope ratio but max RATIO
ls_failed = 0; % this line search did not fail
else
X = X0; f0 = F0; df0 = dF0; % restore best point so far
if ls_failed || i > abs(length) % line search failed twice in a row
break; % or we ran out of time, so we give up
end
s = -df0; d0 = -s'*s; % try steepest
x3 = 1/(1-d0);
ls_failed = 1; % this line search failed
end
end
%fprintf('\n');
surr_code/surr_code/gpml/gprSRPP.m0000644000175000017500000000562311415646336016071 0ustar iam23iam23function [mu, S2SR, S2PP] = gprSRPP(logtheta, covfunc, x, INDEX, y, xstar);
% gprSRPP - Carries out approximate Gaussian process regression prediction
% using the subset of regressors (SR) or projected process approximation (PP)
% and the active set specified by INDEX.
%
% Usage
%
% [mu, S2SR, S2PP] = gprSRPP(logtheta, covfunc, x, INDEX, y, xstar)
%
% where
%
% logtheta is a (column) vector of log hyperparameters
% covfunc is the covariance function, which is assumed to
% be a covSum, and the last entry of the sum is covNoise
% x is a n by D matrix of training inputs
% INDEX is a vector of length m <= n used to specify which
% inputs are used in the active set
% y is a (column) vector (of size n) of targets
% xstar is a nstar by D matrix of test inputs
% mu is a (column) vector (of size nstar) of prediced means
% S2SR is a (column) vector (of size nstar) of predicted variances under SR
% S2PP is a (column) vector (of size nsstar) of predicted variances under PP
%
% where D is the dimension of the input.
%
% For more help on covariance functions, see "help covFunctions".
%
% (C) copyright 2005, 2006 by Chris Williams (2006-03-29).
if ischar(covfunc), covfunc = cellstr(covfunc); end % convert to cell if needed
[n, D] = size(x);
if eval(feval(covfunc{:})) ~= size(logtheta, 1)
error('Error: Number of parameters do not agree with covariance function')
end
% we check that the covfunc cell array is a covSum, with last entry 'covNoise'
if length(covfunc) ~= 2 | ~strcmp(covfunc(1), 'covSum') | ...
~strcmp(covfunc{2}(end), 'covNoise')
error('The covfunc must be "covSum" whose last summand must be "covNoise"')
end
sigma2n = exp(2*logtheta(end)); % noise variance
[nstar, D] = size(xstar); % number of test cases and dimension of input space
m = length(INDEX); % size of subset
% note, that in the following Kmm is computed by extracting the relevant part
% of Knm, thus it will be the "noise-free" covariance (although the covfunc
% specification does include noise).
[v, Knm] = feval(covfunc{:}, logtheta, x, x(INDEX,:));
Kmm = Knm(INDEX,:); % Kmm is a noise-free covariance matrix
jitter = 1e-9*trace(Kmm);
Kmm = Kmm + jitter*eye(m); % as suggested in code of jqc
% a is cov between active set and test points and vstar is variances at test
% points, incl noise variance
[vstar, a] = feval(covfunc{:}, logtheta, x(INDEX,:), xstar);
mu = a'*((sigma2n*Kmm + Knm'*Knm)\(Knm'*y)); % pred mean eq. (8.14) and (8.26)
e = (sigma2n*Kmm + Knm'*Knm) \ a;
S2SR = sigma2n*sum(a.*e,1)'; % noise-free SR variance, eq. 8.15
S2PP = vstar-sum(a.*(Kmm\a),1)'+S2SR; % PP variance eq. (8.27) including noise
S2SR = S2SR + sigma2n; % SR variance inclusing noise
surr_code/surr_code/gpml/covNoise.m0000644000175000017500000000205111415646336016351 0ustar iam23iam23function [A, B] = covNoise(logtheta, x, z);
% Independent covariance function, ie "white noise", with specified variance.
% The covariance function is specified as:
%
% k(x^p,x^q) = s2 * \delta(p,q)
%
% where s2 is the noise variance and \delta(p,q) is a Kronecker delta function
% which is 1 iff p=q and zero otherwise. The hyperparameter is
%
% logtheta = [ log(sqrt(s2)) ]
%
% For more help on design of covariance functions, try "help covFunctions".
%
% (C) Copyright 2006 by Carl Edward Rasmussen, 2006-03-24.
if nargin == 0, A = '1'; return; end % report number of parameters
s2 = exp(2*logtheta); % noise variance
if nargin == 2 % compute covariance matrix
A = s2*eye(size(x,1));
elseif nargout == 2 % compute test set covariances
A = s2;
B = 0; % zeros cross covariance by independence
else % compute derivative matrix
A = 2*s2*eye(size(x,1));
end
surr_code/surr_code/gpml/binaryGP.m0000644000175000017500000001543511415646336016311 0ustar iam23iam23function [out1, out2, out3, out4, alpha, sW, L] = binaryGP(hyper, approx, covfunc, lik, x, y, xstar)
% Approximate binary Gaussian Process classification. Two modes are possible:
% training or testing: if no test cases are supplied, then the approximate
% negative log marginal likelihood and its partial derivatives wrt the
% hyperparameters is computed; this mode is used to fit the hyperparameters. If
% test cases are given, then the test set predictive probabilities are
% returned. Exact inference is intractible, the function uses a specified
% approximation method (see approximations.m), flexible covariance functions
% (see covFunctions.m) and likelihood functions (see likelihoods.m).
%
% usage: [nlZ, dnlZ ] = binaryGP(hyper, approx, covfunc, lik, x, y);
% or: [p,mu,s2,nlZ] = binaryGP(hyper, approx, covfunc, lik, x, y, xstar);
%
% where:
%
% hyper is a column vector of hyperparameters
% approx is a function specifying an approximation method for inference
% covfunc is the name of the covariance function (see below)
% lik is the name of the likelihood function
% x is a n by D matrix of training inputs
% y is a (column) vector (of size n) of binary +1/-1 targets
% xstar is a nn by D matrix of test inputs
% nlZ is the returned value of the negative log marginal likelihood
% dnlZ is a (column) vector of partial derivatives of the negative
% log marginal likelihood wrt each hyperparameter
% p is a (column) vector (of length nn) of predictive probabilities
% mu is a (column) vector (of length nn) of predictive latent means
% s2 is a (column) vector (of length nn) of predictive latent variances
%
% The length of the vector of hyperparameters depends on the covariance
% function, as specified by the "covfunc" input to the function, specifying the
% name of a covariance function. A number of different covariance function are
% implemented, and it is not difficult to add new ones. See covFunctions.m for
% the details.
%
% The "lik" input argument specifies the name of the likelihood function (see
% likelihoods.m).
%
% The "approx" input argument to the function specifies an approximation method
% (see approximations.m). An approximation method returns a representation of
% the approximate Gaussian posterior. Usually, the approximate posterior admits
% the form N(m=K*alpha, V=inv(inv(K)+W)), where alpha is a vector and W is
% diagonal. The approximation method returns:
%
% alpha is a (sparse or full column vector) containing inv(K)*m, where K
% is the prior covariance matrix and m the approx posterior mean
% sW is a (sparse or full column) vector containing diagonal of sqrt(W)
% the approximate posterior covariance matrix is inv(inv(K)+W)
% L is a (sparse or full) matrix, L = chol(sW*K*sW+eye(n))
%
% In cases where the approximate posterior variance does not admit the form
% V=inv(inv(K)+W) with diagonal W, L contains instead -inv(K+inv(W)), and sW
% is unused.
%
% The alpha parameter may be sparse. In that case sW and L can either be sparse
% or full (retaining only the non-zero rows and columns, as indicated by the
% sparsity structure of alpha). The L paramter is allowed to be empty, in
% which case it will be computed.
%
% The function can conveniently be used with the "minimize" function to train
% a Gaussian Process, eg:
%
% [hyper, fX, i] = minimize(hyper, 'binaryGP', length, 'approxEP', 'covSEiso', 'logistic', x, y);
%
% where "length" gives the length of the run: if it is positive, it gives the
% maximum number of line searches, if negative its absolute gives the maximum
% allowed number of function evaluations.
%
% Copyright (c) 2007 Carl Edward Rasmussen and Hannes Nickisch, 2007-06-25.
if nargin<6 || nargin>7
disp('Usage: [nlZ, dnlZ ] = binaryGP(hyper,approx,covfunc,lik,x,y);')
disp(' or: [p,mu,s2,nlZ] = binaryGP(hyper,approx,covfunc,lik,x,y,xstar);')
return
end
if ischar(covfunc), covfunc = cellstr(covfunc); end % convert to cell if needed
[n, D] = size(x); Nhyp = eval(feval(covfunc{:}));
if Nhyp ~= size(hyper, 1)
error('Number of hyperparameters disagrees with covariance function')
end
if numel(approx)==0, approx='approxLA'; end % set a default value
if numel(lik)==0, lik ='cumGauss'; end % set a default value
try % call the approximation method
[alpha, sW, L, nlZ, dnlZ] = feval(approx, hyper, covfunc, lik, x, y);
catch
warning('The approximation did not properly return') % values to ...
nlZ=Inf; dnlZ=zeros(Nhyp,1); alpha=sparse(NaN); sW=NaN; L=1; % ... continue
end
if nargin==6 % return negative log marginal likelihood
out1 = nlZ;
if nargout>1 % where partial derivates requested?
out2 = dnlZ; out3=[]; out4=[];
end
else % otherwise do prediction based on the approximation
if issparse(alpha) % handle things for sparse representations
nz = alpha ~= 0; % determine nonzero indices
if issparse(L), L = full(L(nz,nz) ); end % convert L and sW if necessary
if issparse(sW), sW = full(sW(nz)); end
else nz = true(n,1); end % non-sparse representation
if numel(L)==0 % in case L is not provided, we compute it
K = feval(covfunc{:},hyper,x(nz,:));
L = chol(eye(sum(nz))+sW*sW'.*K);
end
Ltril =all(all(tril(L,-1)==0)); % determine if L is an upper triangular matrix
out1=[]; out2=[]; out3=[]; out4=nlZ; % init output arguments
nstar = size(xstar,1); % number of data points
nperchk = 1000; % number of data points per chunk
nact = 0; % number of processed data points
while nact use Cholesky parameters (alpha,sW,L)
v = L'\(repmat(sW,1,length(id)).*kstar);
s2 = kstarstar - sum(v.*v,1)'; % predictive variances
else % L is not triangular => use alternative parameterisation
s2 = kstarstar + sum(kstar.*(L*kstar),1)'; % predictive variances
end
p = feval(lik, [], mu, s2); % predictive probabilities
out1=[out1;p]; out2=[out2;mu]; out3=[out3;s2]; % assign output arguments
nact = id(end); % set counter to index of last processed data point
end
end
surr_code/surr_code/gpml/binaryLaplaceGP.m0000644000175000017500000000577711415646336017603 0ustar iam23iam23function varargout = binaryLaplaceGP(hyper, covfunc, lik, varargin)
% binaryLaplaceGP - Laplace's approximation for binary Gaussian process
% classification. Two modes are possible: training or testing: if no test
% cases are supplied, then the approximate negative log marginal likelihood
% and its partial derivatives wrt the hyperparameters is computed; this mode is
% used to fit the hyperparameters. If test cases are given, then the test set
% predictive probabilities are returned. The program is flexible in allowing
% several different likelihood functions and a multitude of covariance
% functions.
%
% usage: [nlZ, dnlZ ] = binaryLaplaceGP(hyper, covfunc, lik, x, y);
% or: [p, mu, s2, nlZ] = binaryLaplaceGP(hyper, covfunc, lik, x, y, xstar);
%
% where:
%
% hyper is a (column) vector of hyperparameters
% covfunc is the name of the covariance function (see below)
% lik is the name of the likelihood function (see below)
% x is a n by D matrix of training inputs
% y is a (column) vector (of size n) of binary +1/-1 targets
% xstar is a nn by D matrix of test inputs
% nlZ is the returned value of the negative log marginal likelihood
% dnlZ is a (column) vector of partial derivatives of the negative
% log marginal likelihood wrt each log hyperparameter
% p is a (column) vector (of length nn) of predictive probabilities
% mu is a (column) vector (of length nn) of predictive latent means
% s2 is a (column) vector (of length nn) of predictive latent variances
%
% The length of the vector of log hyperparameters depends on the covariance
% function, as specified by the "covfunc" input to the function, specifying the
% name of a covariance function. A number of different covariance function are
% implemented, and it is not difficult to add new ones. See "help covFunctions"
% for the details.
%
% The shape of the likelihood function is given by the "lik" input to the
% function, specifying the name of the likelihood function. The two implemented
% likelihood functions are:
%
% logistic the logistic function: 1/(1+exp(-x))
% cumGauss the cumulative Gaussian (error function)
%
% The function can conveniently be used with the "minimize" function to train
% a Gaussian process, eg:
%
% [hyper, fX, i] = minimize(hyper, 'binaryLaplaceGP', length, 'covSEiso',
% 'logistic', x, y);
%
% Copyright (c) 2004, 2005, 2006, 2007 by Carl Edward Rasmussen, 2007-02-19.
if nargin<5 || nargin>6
disp('Usage: [nlZ, dnlZ ] = binaryLaplaceGP(hyper, covfunc, lik, x, y);')
disp(' or: [p, mu, s2, nlZ] = binaryLaplaceGP(hyper, covfunc, lik, x, y, xstar);')
return
end
% Note, this function is just a wrapper provided for backward compatibility,
% the functionality is now provided by the more general binaryGP function.
varargout = cell(nargout, 1); % allocate the right number of output arguments
[varargout{:}] = binaryGP(hyper, 'approxLA', covfunc, lik, varargin{:});
surr_code/surr_code/gpml/gpr.m0000644000175000017500000000537311415646336015366 0ustar iam23iam23function [out1, out2] = gpr(logtheta, covfunc, x, y, xstar);
% gpr - Gaussian process regression, with a named covariance function. Two
% modes are possible: training and prediction: if no test data are given, the
% function returns minus the log likelihood and its partial derivatives with
% respect to the hyperparameters; this mode is used to fit the hyperparameters.
% If test data are given, then (marginal) Gaussian predictions are computed,
% whose mean and variance are returned. Note that in cases where the covariance
% function has noise contributions, the variance returned in S2 is for noisy
% test targets; if you want the variance of the noise-free latent function, you
% must substract the noise variance.
%
% usage: [nlml dnlml] = gpr(logtheta, covfunc, x, y)
% or: [mu S2] = gpr(logtheta, covfunc, x, y, xstar)
%
% where:
%
% logtheta is a (column) vector of log hyperparameters
% covfunc is the covariance function
% x is a n by D matrix of training inputs
% y is a (column) vector (of size n) of targets
% xstar is a nn by D matrix of test inputs
% nlml is the returned value of the negative log marginal likelihood
% dnlml is a (column) vector of partial derivatives of the negative
% log marginal likelihood wrt each log hyperparameter
% mu is a (column) vector (of size nn) of prediced means
% S2 is a (column) vector (of size nn) of predicted variances
%
% For more help on covariance functions, see "help covFunctions".
%
% (C) copyright 2006 by Carl Edward Rasmussen (2006-03-20).
if ischar(covfunc), covfunc = cellstr(covfunc); end % convert to cell if needed
[n, D] = size(x);
if eval(feval(covfunc{:})) ~= size(logtheta, 1)
error('Error: Number of parameters do not agree with covariance function')
end
K = feval(covfunc{:}, logtheta, x); % compute training set covariance matrix
L = chol(K)'; % cholesky factorization of the covariance
alpha = solve_chol(L',y);
if nargin == 4 % if no test cases, compute the negative log marginal likelihood
out1 = 0.5*y'*alpha + sum(log(diag(L))) + 0.5*n*log(2*pi);
if nargout == 2 % ... and if requested, its partial derivatives
out2 = zeros(size(logtheta)); % set the size of the derivative vector
W = L'\(L\eye(n))-alpha*alpha'; % precompute for convenience
for i = 1:length(out2)
out2(i) = sum(sum(W.*feval(covfunc{:}, logtheta, x, i)))/2;
end
end
else % ... otherwise compute (marginal) test predictions ...
[Kss, Kstar] = feval(covfunc{:}, logtheta, x, xstar); % test covariances
out1 = Kstar' * alpha; % predicted means
if nargout == 2
v = L\Kstar;
out2 = Kss - sum(v.*v)';
end
end
surr_code/surr_code/gpml/covConst.m0000644000175000017500000000140611415646336016365 0ustar iam23iam23function [A, B] = covConst(logtheta, x, z);
% covariance function for a constant function. The covariance function is
% parameterized as:
%
% k(x^p,x^q) = 1/s2;
%
% The scalar hyperparameter is:
%
% logtheta = [ log(sqrt(s2)) ]
%
% For more help on design of covariance functions, try "help covFunctions".
%
% (C) Copyright 2006 by Carl Edward Rasmussen (2007-07-24)
if nargin == 0, A = '1'; return; end % report number of parameters
is2 = exp(-2*logtheta); % s2 inverse
if nargin == 2
A = is2;
elseif nargout == 2 % compute test set covariances
A = is2;
B = is2;
else % compute derivative matrix
A = -2*is2*ones(size(x,1));
end
surr_code/surr_code/gpml/covLINard.m0000644000175000017500000000202611415646336016407 0ustar iam23iam23function [A, B] = covLINard(logtheta, x, z);
% Linear covariance function with Automatic Relevance Determination (ARD). The
% covariance function is parameterized as:
%
% k(x^p,x^q) = x^p'*inv(P)*x^q
%
% where the P matrix is diagonal with ARD parameters ell_1^2,...,ell_D^2, where
% D is the dimension of the input space. The hyperparameters are:
%
% logtheta = [ log(ell_1)
% log(ell_2)
% .
% log(ell_D) ]
%
% Note that there is no bias term; use covConst to add a bias.
%
% For more help on design of covariance functions, try "help covFunctions".
%
% (C) Copyright 2006 by Carl Edward Rasmussen (2006-03-24)
if nargin == 0, A = 'D'; return; end % report number of parameters
ell = exp(logtheta);
x = x*diag(1./ell);
if nargin == 2
A = x*x';
elseif nargout == 2 % compute test set covariances
z = z*diag(1./ell);
A = sum(z.*z,2);
B = x*z';
else % compute derivative matrices
A = -2*x(:,z)*x(:,z)';
end
surr_code/surr_code/gpml/covPeriodic.m0000644000175000017500000000207511415646336017040 0ustar iam23iam23function [A, B] = covPeriodic(logtheta, x, z);
% covariance function for a smooth periodic function, with unit period. The
% covariance function is:
%
% k(x^p, x^q) = sf2 * exp(-2*sin^2(pi*(x_p-x_q))/ell^2)
%
% where the hyperparameters are:
%
% logtheta = [ log(ell)
% log(sqrt(sf2)) ]
%
% For more help on design of covariance functions, try "help covFunctions".
%
% (C) Copyright 2006 by Carl Edward Rasmussen (2006-04-07)
if nargin == 0, A = '2'; return; end
[n D] = size(x);
ell = exp(logtheta(1));
sf2 = exp(2*logtheta(2));
if nargin == 2
A = sf2*exp(-2*(sin(pi*(repmat(x,1,n)-repmat(x',n,1)))/ell).^2);
elseif nargout == 2 % compute test set covariances
[nn D] = size(z);
A = sf2*ones(nn,1);
B = sf2*exp(-2*(sin(pi*(repmat(x,1,nn)-repmat(z',n,1)))/ell).^2);
else % compute derivative matrices
if z == 1
r = (sin(pi*(repmat(x,1,n)-repmat(x',n,1)))/ell).^2;
A = 4*sf2*exp(-2*r).*r;
else
A = 2*sf2*exp(-2*(sin(pi*(repmat(x,1,n)-repmat(x',n,1)))/ell).^2);
end
end
surr_code/surr_code/gpml/Makefile0000644000175000017500000000020311415646336016043 0ustar iam23iam23all: sq_dist.mexglx solve_chol.mexglx
sq_dist.mexglx: sq_dist.c
mex sq_dist.c
solve_chol.mexglx: solve_chol.c
mex solve_chol.c
surr_code/surr_code/gpml/covSEiso.m0000644000175000017500000000250511415646336016322 0ustar iam23iam23function [A, B] = covSEiso(loghyper, x, z);
% Squared Exponential covariance function with isotropic distance measure. The
% covariance function is parameterized as:
%
% k(x^p,x^q) = sf2 * exp(-(x^p - x^q)'*inv(P)*(x^p - x^q)/2)
%
% where the P matrix is ell^2 times the unit matrix and sf2 is the signal
% variance. The hyperparameters are:
%
% loghyper = [ log(ell)
% log(sqrt(sf2)) ]
%
% For more help on design of covariance functions, try "help covFunctions".
%
% (C) Copyright 2006 by Carl Edward Rasmussen (2007-06-25)
if nargin == 0, A = '2'; return; end % report number of parameters
[n D] = size(x);
ell = exp(loghyper(1)); % characteristic length scale
sf2 = exp(2*loghyper(2)); % signal variance
if nargin == 2
A = sf2*exp(-sq_dist(x'/ell)/2);
elseif nargout == 2 % compute test set covariances
A = sf2*ones(size(z,1),1);
B = sf2*exp(-sq_dist(x'/ell,z'/ell)/2);
else % compute derivative matrix
if z == 1 % first parameter
A = sf2*exp(-sq_dist(x'/ell)/2).*sq_dist(x'/ell);
else % second parameter
A = 2*sf2*exp(-sq_dist(x'/ell)/2);
end
end
surr_code/surr_code/gpml/likelihoods.m0000644000175000017500000000230611415646336017075 0ustar iam23iam23% likelihood: likelihood functions are provided to be used by the binaryGP
% function, for binary Gaussian process classification. Two likelihood
% functions are provided:
%
% logistic
% cumGauss
%
% The likelihood functions have three possible modes, the mode being selected
% as follows (where "lik" stands for any likelihood function):
%
% (log) likelihood evaluation: [p, lp] = lik(y, f)
%
% where y are the targets, f the latent function values, p the probabilities
% and lp the log probabilities. All vectors are the same size.
%
% derivatives (of the log): [lp, dlp, d2lp, d3lp] = lik(y, f, 'deriv')
%
% where lp is a number (sum of the log probablities for each case) and the
% derivatives (up to order 3) of the logs wrt the latent values are vectors
% (as the likelihood factorizes there are no mixed terms).
%
% moments wrt Gaussian measure: [m0, m1, m2] = lik(y, mu, var)
%
% where mk is the k'th moment: \int f^k lik(y,f) N(f|mu,var) df, and if y is
% empty, it is assumed to be a vector of ones.
%
% See the help for the individual likelihood for the computations specific to
% each likelihood function.
%
% Copyright (c) 2007 Carl Edward Rasmussen and Hannes Nickisch 2007-04-11.
surr_code/surr_code/gpml/solve_chol.c0000644000175000017500000000232411415646336016712 0ustar iam23iam23/* solve_chol - solve a linear system A*X = B using the cholesky factorization
of A (where A is square, symmetric and positive definite.
Copyright (c) 2004 Carl Edward Rasmussen. 2004-10-19. */
#include "mex.h"
#include
extern int dpotrs_(char *, int *, int *, double *, int *, double *, int *, int *);
void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
double *C;
int n, m, q;
if (nrhs != 2 || nlhs > 1) /* check the input */
mexErrMsgTxt("Usage: X = solve_chol(R, B)");
n = mxGetN(prhs[0]);
if (n != mxGetM(prhs[0]))
mexErrMsgTxt("Error: First argument matrix must be square");
if (n != mxGetM(prhs[1]))
mexErrMsgTxt("Error: First and second argument matrices must have same number of rows");
m = mxGetN(prhs[1]);
plhs[0] = mxCreateDoubleMatrix(n, m, mxREAL); /* allocate space for output */
C = mxGetPr(plhs[0]);
if (n==0) return; /* if argument was empty matrix, do no more */
memcpy(C,mxGetPr(prhs[1]),n*m*sizeof(double)); /* copy argument matrix */
dpotrs_("U", &n, &m, mxGetPr(prhs[0]), &n, C, &n, &q); /* solve system */
if (q > 0)
mexErrMsgTxt("Error: illegal input to solve_chol");
}
surr_code/surr_code/gpml/approxLA.m0000644000175000017500000000602611415646336016320 0ustar iam23iam23function [alpha, sW, L, nlZ, dnlZ] = approxLA(hyper, covfunc, lik, x, y)
% Laplace approximation to the posterior Gaussian Process.
% The function takes a specified covariance function (see covFunction.m) and
% likelihood function (see likelihoods.m), and is designed to be used with
% binaryGP.m. See also approximations.m.
%
% Copyright (c) 2006, 2007 Carl Edward Rasmussen and Hannes Nickisch 2007-03-29
persistent best_alpha best_nlZ % copy of the best alpha and its obj value
tol = 1e-6; % tolerance for when to stop the Newton iterations
n = size(x,1);
K = feval(covfunc{:}, hyper, x); % evaluate the covariance matrix
if any(size(best_alpha) ~= [n,1]) % find a good starting point for alpha and f
f = zeros(n,1); alpha = f; % start at zero
[lp,dlp,d2lp] = feval(lik,y,f,'deriv'); W=-d2lp;
Psi_new = lp; best_nlZ = Inf;
else
alpha = best_alpha; f = K*alpha; % try best so far
[lp,dlp,d2lp] = feval(lik,y,f,'deriv'); W=-d2lp;
Psi_new = -alpha'*f/2 + lp;
if Psi_new < -n*log(2) % if zero is better ..
f = zeros(n,1); alpha = f; % .. go back
[lp,dlp,d2lp] = feval(lik,y,f,'deriv'); W=-d2lp;
Psi_new = -alpha'*f/2 + lp;
end
end
Psi_old = -Inf; % make sure while loop starts
while Psi_new - Psi_old > tol % begin Newton's iterations
Psi_old = Psi_new; alpha_old = alpha;
sW = sqrt(W);
L = chol(eye(n)+sW*sW'.*K); % L'*L=B=eye(n)+sW*K*sW
b = W.*f+dlp;
alpha = b - sW.*solve_chol(L,sW.*(K*b));
f = K*alpha;
[lp,dlp,d2lp,d3lp] = feval(lik,y,f,'deriv'); W=-d2lp;
Psi_new = -alpha'*f/2 + lp;
i = 0;
while i < 10 && Psi_new < Psi_old % if objective didn't increase
alpha = (alpha_old+alpha)/2; % reduce step size by half
f = K*alpha;
[lp,dlp,d2lp,d3lp] = feval(lik,y,f,'deriv'); W=-d2lp;
Psi_new = -alpha'*f/2 + lp;
i = i+1;
end
end % end Newton's iterations
sW = sqrt(W); % recalculate L
L = chol(eye(n)+sW*sW'.*K); % L'*L=B=eye(n)+sW*K*sW
nlZ = alpha'*f/2 - lp + sum(log(diag(L))); % approx neg log marg likelihood
if nlZ < best_nlZ % if best so far ..
best_alpha = alpha; best_nlZ = nlZ; % .. then remember for next call
end
if nargout >= 4 % do we want derivatives?
dnlZ = zeros(size(hyper)); % allocate space for derivatives
Z = repmat(sW,1,n).*solve_chol(L, diag(sW));
C = L'\(repmat(sW,1,n).*K);
s2 = 0.5*(diag(K)-sum(C.^2,1)').*d3lp;
for j=1:length(hyper)
dK = feval(covfunc{:}, hyper, x, j);
s1 = alpha'*dK*alpha/2-sum(sum(Z.*dK))/2;
b = dK*dlp;
s3 = b-K*(Z*b);
dnlZ(j) = -s1-s2'*s3;
end
end
surr_code/surr_code/gpml/covRQiso.m0000644000175000017500000000315711415646336016341 0ustar iam23iam23function [A, B] = covRQiso(loghyper, x, z)
% Rational Quadratic covariance function with isotropic distance measure. The
% covariance function is parameterized as:
%
% k(x^p,x^q) = sf2 * [1 + (x^p - x^q)'*inv(P)*(x^p - x^q)/(2*alpha)]^(-alpha)
%
% where the P matrix is ell^2 times the unit matrix, sf2 is the signal
% variance and alpha is the shape parameter for the RQ covariance. The
% hyperparameters are:
%
% loghyper = [ log(ell)
% log(sqrt(sf2))
% log(alpha) ]
%
% For more help on design of covariance functions, try "help covFunctions".
%
% (C) Copyright 2006 by Carl Edward Rasmussen (2006-09-08)
if nargin == 0, A = '3'; return; end
[n, D] = size(x);
persistent K;
ell = exp(loghyper(1));
sf2 = exp(2*loghyper(2));
alpha = exp(loghyper(3));
if nargin == 2 % compute covariance matrix
K = (1+0.5*sq_dist(x'/ell)/alpha);
A = sf2*(K.^(-alpha));
elseif nargout == 2 % compute test set covariances
A = sf2*ones(size(z,1),1);
B = sf2*((1+0.5*sq_dist(x'/ell,z'/ell)/alpha).^(-alpha));
else % compute derivative matrices
% check for correct dimension of the previously calculated kernel matrix
if any(size(K)~=n)
K = (1+0.5*sq_dist(x'/ell)/alpha);
end
if z == 1 % length scale parameters
A = sf2*K.^(-alpha-1).*sq_dist(x'/ell);
elseif z == 2 % magnitude parameter
A = 2*sf2*(K.^(-alpha));
else
A = sf2*K.^(-alpha).*(0.5*sq_dist(x'/ell)./K - alpha*log(K));
clear K;
end
end
surr_code/surr_code/gpml/covMatern3iso.m0000644000175000017500000000256011415646336017325 0ustar iam23iam23function [A, B] = covMatern3iso(loghyper, x, z)
% Matern covariance function with nu = 3/2 and isotropic distance measure. The
% covariance function is:
%
% k(x^p,x^q) = s2f * (1 + sqrt(3)*d(x^p,x^q)) * exp(-sqrt(3)*d(x^p,x^q))
%
% where d(x^p,x^q) is the distance sqrt((x^p-x^q)'*inv(P)*(x^p-x^q)), P is ell
% times the unit matrix and sf2 is the signal variance. The hyperparameters
% are:
%
% loghyper = [ log(ell)
% log(sqrt(sf2)) ]
%
% For more help on design of covariance functions, try "help covFunctions".
%
% (C) Copyright 2006 by Carl Edward Rasmussen (2006-03-24)
if nargin == 0, A = '2'; return; end
persistent K;
[n, D] = size(x);
ell = exp(loghyper(1));
sf2 = exp(2*loghyper(2));
x = sqrt(3)*x/ell;
if nargin == 2 % compute covariance matrix
A = sqrt(sq_dist(x'));
K = sf2*exp(-A).*(1+A);
A = K;
elseif nargout == 2 % compute test set covariances
z = sqrt(3)*z/ell;
A = sf2;
B = sqrt(sq_dist(x',z'));
B = sf2*exp(-B).*(1+B);
else % compute derivative matrices
if z == 1
A = sf2*sq_dist(x').*exp(-sqrt(sq_dist(x')));
else
% check for correct dimension of the previously calculated kernel matrix
if any(size(K)~=n)
K = sqrt(sq_dist(x'));
K = sf2*exp(-K).*(1+K);
end
A = 2*K;
clear K;
end
end
surr_code/surr_code/gpml/covProd.m0000644000175000017500000000502611415646336016205 0ustar iam23iam23function [A, B] = covProd(covfunc, logtheta, x, z);
% covProd - compose a covariance function as the product of other covariance
% functions. This function doesn't actually compute very much on its own, it
% merely does some bookkeeping, and calls other covariance functions to do the
% actual work.
%
% For more help on design of covariance functions, try "help covFunctions".
%
% (C) Copyright 2006 by Carl Edward Rasmussen, 2006-04-06.
for i = 1:length(covfunc) % iterate over covariance functions
f = covfunc(i);
if iscell(f{:}), f = f{:}; end % dereference cell array if necessary
j(i) = cellstr(feval(f{:}));
end
if nargin == 1, % report number of parameters
A = char(j(1)); for i=2:length(covfunc), A = [A, '+', char(j(i))]; end
return
end
[n, D] = size(x);
v = []; % v vector indicates to which covariance parameters belong
for i = 1:length(covfunc), v = [v repmat(i, 1, eval(char(j(i))))]; end
switch nargin
case 3 % compute covariance matrix
A = ones(n, n); % allocate space for covariance matrix
for i = 1:length(covfunc) % iteration over factor functions
f = covfunc(i);
if iscell(f{:}), f = f{:}; end % dereference cell array if necessary
A = A .* feval(f{:}, logtheta(v==i), x); % multiply covariances
end
case 4 % compute derivative matrix or test set covariances
if nargout == 2 % compute test set cavariances
A = ones(size(z,1),1); B = ones(size(x,1),size(z,1)); % allocate space
for i = 1:length(covfunc)
f = covfunc(i);
if iscell(f{:}), f = f{:}; end % dereference cell array if necessary
[AA BB] = feval(f{:}, logtheta(v==i), x, z); % compute test covariances
A = A .* AA; B = B .* BB; % and accumulate
end
else % compute derivative matrices
A = ones(n, n);
ii = v(z); % which covariance function
j = sum(v(1:z)==ii); % which parameter in that covariance
for i = 1:length(covfunc)
f = covfunc(i);
if iscell(f{:}), f = f{:}; end % dereference cell array if necessary
if i == ii
A = A .* feval(f{:}, logtheta(v==i), x, j); % multiply derivative
else
A = A .* feval(f{:}, logtheta(v==i), x); % multiply covariance
end
end
end
end
surr_code/surr_code/gpml/binaryEPGP.m0000644000175000017500000000515711415646336016536 0ustar iam23iam23function varargout = binaryEPGP(hyper, covfunc, varargin)
% binaryEPGP - The Expectation Propagation approximation for binary Gaussian
% process classification. Two modes are possible: training or testing: if no
% test cases are supplied, then the approximate negative log marginal
% likelihood and its partial derivatives wrt the hyperparameters is computed;
% this mode is used to fit the hyperparameters. If test cases are given, then
% the test set predictive probabilities are returned. The program is flexible
% in allowing a multitude of covariance functions.
%
% usage: [nlZ, dnlZ ] = binaryEPGP(hyper, covfunc, x, y);
% or: [p, mu, s2, nlZ] = binaryEPGP(hyper, covfunc, x, y, xstar);
%
% where:
%
% hyper is a (column) vector of hyperparameters
% covfunc is the name of the covariance function (see below)
% lik is the name of the likelihood function (see below)
% x is a n by D matrix of training inputs
% y is a (column) vector (of size n) of binary +1/-1 targets
% xstar is a nn by D matrix of test inputs
% nlZ is the returned value of the negative log marginal likelihood
% dnlZ is a (column) vector of partial derivatives of the negative
% log marginal likelihood wrt each log hyperparameter
% p is a (column) vector (of length nn) of predictive probabilities
% mu is a (column) vector (of length nn) of predictive latent means
% s2 is a (column) vector (of length nn) of predictive latent variances
%
% The length of the vector of hyperparameters depends on the covariance
% function, as specified by the "covfunc" input to the function, specifying the
% name of a covariance function. A number of different covariance function are
% implemented, and it is not difficult to add new ones. See "help covFunctions"
% for the details
%
% The function can conveniently be used with the "minimize" function to train
% a Gaussian process, eg:
%
% [hyper, fX, i] = minimize(hyper, 'binaryEPGP', length, 'covSEiso',
% 'logistic', x, y);
%
% Copyright (c) 2004, 2005, 2006, 2007 Carl Edward Rasmussen, 2007-02-19.
if nargin<4 || nargin>5
disp('Usage: [nlZ, dnlZ ] = binaryEPGP(hyper, covfunc, x, y);')
disp(' or: [p, mu, s2, nlZ] = binaryEPGP(hyper, covfunc, x, y, xstar);')
return
end
% Note, this function is just a wrapper provided for backward compatibility,
% the functionality is now provided by the more general binaryGP function.
varargout = cell(nargout, 1); % allocate the right number of output arguments
[varargout{:}] = binaryGP(hyper, 'approxEP', covfunc, 'cumGauss', varargin{:});
surr_code/surr_code/gpml/Contents.m0000644000175000017500000000514011415646336016363 0ustar iam23iam23% gpml: code from Rasmussen & Williams: Gaussian Processes for Machine Learning
% date: 2007-07-25.
%
% approxEP.m - the approximation method for Expectation Propagation
% approxLA.m - the approximation method for Laplace's approximation
% approximations.m - help for approximation methods
% binaryEPGP.m - outdated, the EP approx for binary GP classification
% binaryGP.m - binary Gaussian process classification
% binaryLaplaceGP.m - outdated, Laplace's approx for binary GP classification
%
% covConst.m - covariance for constant functions
% covFunctions.m - help file with overview of covariance functions
% covLINard.m - linear covariance function with ard
% covLINone.m - linear covaraince function
% covMatern3iso.m - Matern covariance function with nu=3/2
% covMatern5iso.m - Matern covaraince function with nu=5/2
% covNNone.m - neural network covariance function
% covNoise.m - independent covaraince function (ie white noise)
% covPeriodic.m - covariance for smooth periodic function, with unit period
% covProd.m - function for multiplying other covariance functions
% covRQard.m - rational quadratic covariance function with ard
% covRQiso.m - isotropic rational quadratic covariance function
% covSEard.m - squared exponential covariance function with ard
% covSEiso.m - isotropic squared exponential covariance function
% covSum.m - function for adding other covariance functions
%
% cumGauss.m - cumulative Gaussian likelihood function
% gpr.m - Gaussian process regression with general covariance
% function
% gprSRPP.m - Implements SR and PP approximations to GPR
% likelihoods.m - help function for classification likelihoods
% logistic.m - logistic likelihood function
% minimize.m - Minimize a differentiable multivariate function
% solve_chol.c - Solve linear equations from the Cholesky factorization
% should be compiled into a mex file
% solve_chol.m - A matlab implementation of the above, used only in case
% the mex file wasn't generated (not very efficient)
% sq_dist.c - Compute a matrix of all pairwise squared distances
% should be compiled into a mex file
% sq_dist.m - A matlab implementation of the above, used only in case
% the mex file wasn't generated (not very efficient)
%
% See also the help for the demonstration scripts in the gpml-demo directory
%
% Copyright (c) 2005, 2006 by Carl Edward Rasmussen and Chris Williams
surr_code/surr_code/gpml/cumGauss.m0000644000175000017500000000646711415646336016372 0ustar iam23iam23function [out1, out2, out3, out4] = cumGauss(y, f, var)
% cumGauss - Cumulative Gaussian likelihood function. The expression for the
% likelihood is cumGauss(t) = normcdf(t) = (1+erf(t/sqrt(2)))/2.
%
% Three modes are provided, for computing likelihoods, derivatives and moments
% respectively, see likelihoods.m for the details. In general, care is taken
% to avoid numerical issues when the arguments are extreme. The
% moments \int f^k cumGauss(y,f) N(f|mu,var) df are calculated analytically.
%
% Copyright (c) 2007 Carl Edward Rasmussen and Hannes Nickisch, 2007-03-29.
if nargin>1, y=sign(y); end % allow only +/- 1 as values
if nargin == 2 % (log) likelihood evaluation
if numel(y)>0, yf = y.*f; else yf = f; end % product of latents and labels
out1 = (1+erf(yf/sqrt(2)))/2; % likelihood
if nargout>1
out2 = zeros(size(f));
b = 0.158482605320942; % quadratic asymptotics approximated at -6
c = -1.785873318175113;
ok = yf>-6; % normal evaluation for larger values
out2( ok) = log(out1(ok));
out2(~ok) = -yf(~ok).^2/2 + b*yf(~ok) + c; % log of sigmoid
end
elseif nargin == 3
if strcmp(var,'deriv') % derivatives of the log
if numel(y)==0, y=1; end
yf = y.*f; % product of latents and labels
[p,lp] = cumGauss(y,f);
out1 = sum(lp);
if nargout>1 % dlp, derivative of log likelihood
n_p = zeros(size(f)); % safely compute Gaussian over cumulative Gaussian
ok = yf>-5; % normal evaluation for large values of yf
n_p(ok) = (exp(-yf(ok).^2/2)/sqrt(2*pi))./p(ok);
bd = yf<-6; % tight upper bound evaluation
n_p(bd) = sqrt(yf(bd).^2/4+1)-yf(bd)/2;
interp = ~ok & ~bd; % linearly interpolate between both of them
tmp = yf(interp);
lam = -5-yf(interp);
n_p(interp) = (1-lam).*(exp(-tmp.^2/2)/sqrt(2*pi))./p(interp) + ...
lam .*(sqrt(tmp.^2/4+1)-tmp/2);
out2 = y.*n_p; % dlp, derivative of log likelihood
if nargout>2 % d2lp, 2nd derivative of log likelihood
out3 = -n_p.^2 - yf.*n_p;
if nargout>3 % d3lp, 3rd derivative of log likelihood
out4 = 2*y.*n_p.^3 +3*f.*n_p.^2 +y.*(f.^2-1).*n_p;
end
end
end
else % compute moments
mu = f; % 2nd argument is the mean of a Gaussian
z = mu./sqrt(1+var);
if numel(y)>0, z=z.*y; end
out1 = cumGauss([],z); % zeroth raw moment
[dummy,n_p] = cumGauss([],z,'deriv'); % Gaussian over cumulative Gaussian
if nargout>1
if numel(y)==0, y=1; end
out2 = mu + y.*var.*n_p./sqrt(1+var); % 1st raw moment
if nargout>2
out3 = 2*mu.*out2 -mu.^2 +var -z.*var.^2.*n_p./(1+var); % 2nd raw moment
out3 = out3.*out1;
end
out2 = out2.*out1;
end
end
else
error('No valid input provided.')
end
surr_code/surr_code/gpml/approxEP.m0000644000175000017500000001175111415646336016331 0ustar iam23iam23function [alpha, sW, L, nlZ, dnlZ] = approxEP(hyper, covfunc, lik, x, y)
% Expectation Propagation approximation to the posterior Gaussian Process.
% The function takes a specified covariance function (see covFunction.m) and
% likelihood function (see likelihoods.m), and is designed to be used with
% binaryGP.m. See also approximations.m. In the EP algorithm, the sites are
% updated in random order, for better performance when cases are ordered
% according to the targets.
%
% Copyright (c) 2006, 2007 Carl Edward Rasmussen and Hannes Nickisch 2007-07-24
persistent best_ttau best_tnu best_nlZ % keep tilde parameters between calls
tol = 1e-3; max_sweep = 10; % tolerance for when to stop EP iterations
n = size(x,1);
K = feval(covfunc{:}, hyper, x); % evaluate the covariance matrix
% A note on naming: variables are given short but descriptive names in
% accordance with Rasmussen & Williams "GPs for Machine Learning" (2006): mu
% and s2 are mean and variance, nu and tau are natural parameters. A leading t
% means tilde, a subscript _ni means "not i" (for cavity parameters), or _n
% for a vector of cavity parameters.
if any(size(best_ttau) ~= [n 1]) % find starting point for tilde parameters
ttau = zeros(n,1); % initialize to zero if we have no better guess
tnu = zeros(n,1);
Sigma = K; % initialize Sigma and mu, the parameters of ..
mu = zeros(n, 1); % .. the Gaussian posterior approximation
nlZ = n*log(2);
best_nlZ = Inf;
else
ttau = best_ttau; % try the tilde values from previous call
tnu = best_tnu;
[Sigma, mu, nlZ, L] = epComputeParams(K, y, ttau, tnu, lik);
if nlZ > n*log(2) % if zero is better ..
ttau = zeros(n,1); % .. then initialize with zero instead
tnu = zeros(n,1);
Sigma = K; % initialize Sigma and mu, the parameters of ..
mu = zeros(n, 1); % .. the Gaussian posterior approximation
nlZ = n*log(2);
end
end
nlZ_old = Inf; sweep = 0; % make sure while loop starts
while nlZ < nlZ_old - tol && sweep < max_sweep % converged or max. sweeps?
nlZ_old = nlZ; sweep = sweep+1;
for i = randperm(n) % iterate EP updates (in random order) over examples
tau_ni = 1/Sigma(i,i)-ttau(i); % first find the cavity distribution ..
nu_ni = mu(i)/Sigma(i,i)-tnu(i); % .. parameters tau_ni and nu_ni
% compute the desired raw moments m0, m1=hmu and m2; m0 is not used
[m0, m1, m2] = feval(lik, y(i), nu_ni/tau_ni, 1/tau_ni);
hmu = m1./m0;
hs2 = m2./m0 - hmu^2; % compute second central moment
ttau_old = ttau(i); % then find the new tilde parameters
ttau(i) = 1/hs2 - tau_ni;
tnu(i) = hmu/hs2 - nu_ni;
ds2 = ttau(i) - ttau_old; % finally rank-1 update Sigma ..
si = Sigma(:,i);
Sigma = Sigma - ds2/(1+ds2*si(i))*si*si'; % takes 70% of total time
mu = Sigma*tnu; % .. and recompute mu
end
[Sigma, mu, nlZ, L] = epComputeParams(K, y, ttau, tnu, lik); % recompute
% Sigma & mu since repeated rank-one updates can destroy numerical precision
end
if sweep == max_sweep
disp('Warning: maximum number of sweeps reached in function approxEP')
end
if nlZ < best_nlZ % if best so far ..
best_ttau = ttau; best_tnu = tnu; best_nlZ = nlZ; % .. keep for next call
end
sW = sqrt(ttau); % compute output arguments, L and nlZ are done
alpha = tnu-sW.*solve_chol(L,sW.*(K*tnu));
if nargout > 4 % do we want derivatives?
dnlZ = zeros(size(hyper)); % allocate space for derivatives
F = alpha*alpha'-repmat(sW,1,n).*solve_chol(L,diag(sW));
for j=1:length(hyper)
dK = feval(covfunc{:}, hyper, x, j);
dnlZ(j) = -sum(sum(F.*dK))/2;
end
end
% function to compute the parameters of the Gaussian approximation, Sigma and
% mu, and the negative log marginal likelihood, nlZ, from the current site
% parameters, ttau and tnu. Also returns L (useful for predictions).
function [Sigma, mu, nlZ, L] = epComputeParams(K, y, ttau, tnu, lik)
n = length(y); % number of training cases
ssi = sqrt(ttau); % compute Sigma and mu
L = chol(eye(n)+ssi*ssi'.*K); % L'*L=B=eye(n)+sW*K*sW
V = L'\(repmat(ssi,1,n).*K);
Sigma = K - V'*V;
mu = Sigma*tnu;
tau_n = 1./diag(Sigma)-ttau; % compute the log marginal likelihood
nu_n = mu./diag(Sigma)-tnu; % vectors of cavity parameters
nlZ = sum(log(diag(L))) - sum(log(feval(lik, y, nu_n./tau_n, 1./tau_n))) ...
-tnu'*Sigma*tnu/2 - nu_n'*((ttau./tau_n.*nu_n-2*tnu)./(ttau+tau_n))/2 ...
+sum(tnu.^2./(tau_n+ttau))/2-sum(log(1+ttau./tau_n))/2;
surr_code/surr_code/gpml/solve_chol.m0000644000175000017500000000173711415646336016733 0ustar iam23iam23% solve_chol - solve linear equations from the Cholesky factorization.
% Solve A*X = B for X, where A is square, symmetric, positive definite. The
% input to the function is R the Cholesky decomposition of A and the matrix B.
% Example: X = solve_chol(chol(A),B);
%
% NOTE: The program code is written in the C language for efficiency and is
% contained in the file solve_chol.c, and should be compiled using matlabs mex
% facility. However, this file also contains a (less efficient) matlab
% implementation, supplied only as a help to people unfamiliar with mex. If
% the C code has been properly compiled and is avaiable, it automatically
% takes precendence over the matlab code in this file.
%
% Copyright (c) 2004, 2005, 2006 by Carl Edward Rasmussen. 2006-02-08.
function x = solve_chol(A, B);
if nargin ~= 2 | nargout > 1
error('Wrong number of arguments.');
end
if size(A,1) ~= size(A,2) | size(A,1) ~= size(B,1)
error('Wrong sizes of matrix arguments.');
end
x = A\(A'\B);
surr_code/surr_code/gpml/covMatern5iso.m0000644000175000017500000000261111415646336017324 0ustar iam23iam23function [A, B] = covMatern5iso(loghyper, x, z)
% Matern covariance function with nu = 5/2 and isotropic distance measure. The
% covariance function is:
%
% k(x^p,x^q) = s2f * (1 + sqrt(5)*d + 5*d/3) * exp(-sqrt(5)*d)
%
% where d is the distance sqrt((x^p-x^q)'*inv(P)*(x^p-x^q)), P is ell times
% the unit matrix and sf2 is the signal variance. The hyperparameters are:
%
% loghyper = [ log(ell)
% log(sqrt(sf2)) ]
%
% For more help on design of covariance functions, try "help covFunctions".
%
% (C) Copyright 2006 by Carl Edward Rasmussen (2006-03-24)
if nargin == 0, A = '2'; return; end
persistent K;
[n, D] = size(x);
ell = exp(loghyper(1));
sf2 = exp(2*loghyper(2));
x = sqrt(5)*x/ell;
if nargin == 2 % compute covariance matrix
A = sq_dist(x');
K = sf2*exp(-sqrt(A)).*(1+sqrt(A)+A/3);
A = K;
elseif nargout == 2 % compute test set covariances
z = sqrt(5)*z/ell;
A = sf2;
B = sq_dist(x',z');
B = sf2*exp(-sqrt(B)).*(1+sqrt(B)+B/3);
else % compute derivative matrices
if z == 1
A = sq_dist(x');
A = sf2*(A+sqrt(A).^3).*exp(-sqrt(A))/3;
else
% check for correct dimension of the previously calculated kernel matrix
if any(size(K)~=n)
K = sq_dist(x');
K = sf2*exp(-sqrt(K)).*(1+sqrt(K)+K/3);
end
A = 2*K;
clear K;
end
end
surr_code/surr_code/gpml/approximations.m0000644000175000017500000000362011415646336017644 0ustar iam23iam23% approximations: Exact inference for Gaussian process classification is
% intractable, and approximations are necessary. Different approximation
% techniques have been implemented, which all rely on a Gaussian approximation
% to the non-Gaussian posterior:
%
% approxEP the Expectation Propagation (EP) algorithm
% approxLA Laplace's method
%
% which are used by the Gaussian process classification funtion binaryGP.m.
% The interface to the approximation methods is the following:
%
% function [alpha, sW, L, nlZ, dnlZ] = approx..(hyper, covfunc, lik, x, y)
%
% where:
%
% hyper is a column vector of hyperparameters
% covfunc is the name of the covariance function (see covFunctions.m)
% lik is the name of the likelihood function (see likelihoods.m)
% x is a n by D matrix of training inputs
% y is a (column) vector (of size n) of binary +1/-1 targets
% nlZ is the returned value of the negative log marginal likelihood
% dnlZ is a (column) vector of partial derivatives of the negative
% log marginal likelihood wrt each hyperparameter
% alpha is a (sparse or full column vector) containing inv(K)*m, where K
% is the prior covariance matrix and m the approx posterior mean
% sW is a (sparse or full column) vector containing diagonal of sqrt(W)
% the approximate posterior covariance matrix is inv(inv(K)+W)
% L is a (sparse or full) matrix, L = chol(sW*K*sW+eye(n))
%
% Usually, the approximate posterior to be returned admits the form
% N(m=K*alpha, V=inv(inv(K)+W)), where alpha is a vector and W is diagonal;
% if not, then L contains instead -inv(K+inv(W)), and sW is unused.
%
% For more information on the individual approximation methods and their
% implementations, see the separate approx??.m files. See also binaryGP.m
%
% Copyright (c) by Carl Edward Rasmussen and Hannes Nickisch, 2007-06-25.
surr_code/surr_code/gpml/covNNone.m0000644000175000017500000000401411415646336016312 0ustar iam23iam23function [A, B] = covNNone(loghyper, x, z)
% Neural network covariance function with a single parameter for the distance
% measure. The covariance function is parameterized as:
%
% k(x^p,x^q) = sf2 * asin(x^p'*P*x^q / sqrt[(1+x^p'*P*x^p)*(1+x^q'*P*x^q)])
%
% where the x^p and x^q vectors on the right hand side have an added extra bias
% entry with unit value. P is ell^-2 times the unit matrix and sf2 controls the
% signal variance. The hyperparameters are:
%
% loghyper = [ log(ell)
% log(sqrt(sf2) ]
%
% For more help on design of covariance functions, try "help covFunctions".
%
% (C) Copyright 2006 by Carl Edward Rasmussen (2006-03-24)
if nargin == 0, A = '2'; return; end % report number of parameters
persistent Q K;
[n D] = size(x);
ell = exp(loghyper(1)); em2 = ell^(-2);
sf2 = exp(2*loghyper(2));
x = x/ell;
if nargin == 2 % compute covariance
Q = x*x';
K = (em2+Q)./(sqrt(1+em2+diag(Q))*sqrt(1+em2+diag(Q)'));
A = sf2*asin(K);
elseif nargout == 2 % compute test set covariances
z = z/ell;
A = sf2*asin((em2+sum(z.*z,2))./(1+em2+sum(z.*z,2)));
B = sf2*asin((em2+x*z')./sqrt((1+em2+sum(x.*x,2))*(1+em2+sum(z.*z,2)')));
else % compute derivative matrix
% check for correct dimension of the previously calculated kernel matrix
if any(size(Q)~=n)
Q = x*x';
end
% check for correct dimension of the previously calculated kernel matrix
if any(size(K)~=n)
K = (em2+Q)./(sqrt(1+em2+diag(Q))*sqrt(1+em2+diag(Q)'));
end
if z == 1 % first parameter
v = (em2+sum(x.*x,2))./(1+em2+diag(Q));
A = -2*sf2*((em2+Q)./(sqrt(1+em2+diag(Q))*sqrt(1+em2+diag(Q)'))- ...
K.*(repmat(v,1,n)+repmat(v',n,1))/2)./sqrt(1-K.^2);
clear Q;
else % second parameter
A = 2*sf2*asin(K);
clear K;
end
end
surr_code/surr_code/gpml/Copyright0000644000175000017500000000141011415646336016277 0ustar iam23iam23
Software that implements
GAUSSIAN PROCESS REGRESSION AND CLASSIFICATION
Copyright (c) 2005 - 2007 by Carl Edward Rasmussen and Chris Williams
Permission is granted for anyone to copy, use, or modify these programs for
purposes of research or education, provided this copyright notice is retained,
and note is made of any changes that have been made.
These programs are distributed without any warranty, express or
implied. As these programs were written for research purposes only, they
have not been tested to the degree that would be advisable in any
important application. All use of these programs is entirely at the
user's own risk.
The code and associated documentation are avaiable from
http://www.GaussianProcess.org/gpml/code
surr_code/surr_code/gpml/covSum.m0000644000175000017500000000454011415646336016045 0ustar iam23iam23function [A, B] = covSum(covfunc, logtheta, x, z);
% covSum - compose a covariance function as the sum of other covariance
% functions. This function doesn't actually compute very much on its own, it
% merely does some bookkeeping, and calls other covariance functions to do the
% actual work.
%
% For more help on design of covariance functions, try "help covFunctions".
%
% (C) Copyright 2006 by Carl Edward Rasmussen, 2006-03-20.
for i = 1:length(covfunc) % iterate over covariance functions
f = covfunc(i);
if iscell(f{:}), f = f{:}; end % dereference cell array if necessary
j(i) = cellstr(feval(f{:}));
end
if nargin == 1, % report number of parameters
A = char(j(1)); for i=2:length(covfunc), A = [A, '+', char(j(i))]; end
return
end
[n, D] = size(x);
v = []; % v vector indicates to which covariance parameters belong
for i = 1:length(covfunc), v = [v repmat(i, 1, eval(char(j(i))))]; end
switch nargin
case 3 % compute covariance matrix
A = zeros(n, n); % allocate space for covariance matrix
for i = 1:length(covfunc) % iteration over summand functions
f = covfunc(i);
if iscell(f{:}), f = f{:}; end % dereference cell array if necessary
A = A + feval(f{:}, logtheta(v==i), x); % accumulate covariances
end
case 4 % compute derivative matrix or test set covariances
if nargout == 2 % compute test set cavariances
A = zeros(size(z,1),1); B = zeros(size(x,1),size(z,1)); % allocate space
for i = 1:length(covfunc)
f = covfunc(i);
if iscell(f{:}), f = f{:}; end % dereference cell array if necessary
[AA BB] = feval(f{:}, logtheta(v==i), x, z); % compute test covariances
A = A + AA; B = B + BB; % and accumulate
end
else % compute derivative matrices
i = v(z); % which covariance function
j = sum(v(1:z)==i); % which parameter in that covariance
f = covfunc(i);
if iscell(f{:}), f = f{:}; end % dereference cell array if necessary
A = feval(f{:}, logtheta(v==i), x, j); % compute derivative
end
end
surr_code/surr_code/gpml/covLINone.m0000644000175000017500000000173011415646336016423 0ustar iam23iam23function [A, B] = covLINone(logtheta, x, z);
% Linear covariance function with a single hyperparameter. The covariance
% function is parameterized as:
%
% k(x^p,x^q) = x^p'*inv(P)*x^q + 1./t2;
%
% where the P matrix is t2 times the unit matrix. The second term plays the
% role of the bias. The hyperparameter is:
%
% logtheta = [ log(sqrt(t2)) ]
%
% For more help on design of covariance functions, try "help covFunctions".
%
% (C) Copyright 2006 by Carl Edward Rasmussen (2006-03-27)
if nargin == 0, A = '1'; return; end % report number of parameters
it2 = exp(-2*logtheta); % t2 inverse
if nargin == 2 % compute covariance
A = it2*(1+x*x');
elseif nargout == 2 % compute test set covariances
A = it2*(1+sum(z.*z,2));
B = it2*(1+x*z');
else % compute derivative matrix
A = -2*it2*(1+x*x');
end
surr_code/surr_code/gpml/logistic.m0000644000175000017500000001054511415646336016410 0ustar iam23iam23function [out1, out2, out3, out4] = logistic(y, f, var)
% logistic - logistic likelihood function. The expression for the likelihood is
% logistic(t) = 1./(1+exp(-t)).
%
% Three modes are provided, for computing likelihoods, derivatives and moments
% respectively, see likelihoods.m for the details. In general, care is taken
% to avoid numerical issues when the arguments are extreme. The moments
% \int f^k cumGauss(y,f) N(f|mu,var) df are calculated using an approximation
% to the cumulative Gaussian based on a mixture of 5 cumulative Gaussian
% functions (or alternatively using Gauss-Hermite quadrature, which may be less
% accurate).
%
% Copyright (c) 2007 Carl Edward Rasmussen and Hannes Nickisch, 2007-07-25.
if nargin>1, y=sign(y); end % allow only +/- 1 as values
if nargin == 2 % (log) likelihood evaluation
if numel(y)>0, yf = y.*f; else yf = f; end % product of latents and labels
out1 = 1./(1+exp(-yf)); % likelihood
if nargout>1
out2 = yf;
ok = -351 % dlp - first derivatives
s = min(0,f);
p = exp(s)./(exp(s)+exp(s-f)); % p = 1./(1+exp(-f))
out2 = (y+1)/2-p; % dlp, derivative of log likelihood
if nargout>2 % d2lp, 2nd derivative of log likelihood
out3 = -exp(2*s-f)./(exp(s)+exp(s-f)).^2;
if nargout>3 % d3lp, 3rd derivative of log likelihood
out4 = 2*out3.*(0.5-p);
end
end
end
else % compute moments
mu = f; % 2nd argument is the mean of a Gaussian
if numel(y)==0, y=ones(size(mu)); end % if empty, assume y=1
% Two methods of integration are possible; the latter is more accurate
% [out1,out2,out3] = gauherint(y, mu, var);
[out1,out2,out3] = erfint(y, mu, var);
end
else
error('No valid input provided.')
end
% The gauherint function approximates "\int t^k logistic(y t) N(t|mu,var)dt" by
% means of Gaussian Hermite Quadrature. A call to gauher.m is made.
function [m0,m1,m2] = gauherint(y, mu, var)
N = 20; [f,w] = gauher(N); % 20 yields precalculated weights
sz = size(mu);
f0 = sqrt(var(:))*f'+repmat(mu(:),[1,N]); % center values of f
sig = logistic( repmat(y(:),[1,N]), f0 ); % calculate the likelihood values
m0 = reshape(sig*w, sz); % zeroth moment
if nargout>1 % first moment
m1 = reshape(f0.*sig*w, sz);
if nargout>2, m2 = reshape(f0.*f0.*sig*w, sz); end % second moment
end
% The erfint function approximates "\int t^k logistic(y t) N(t|mu,s2) dt" by
% setting:
% logistic(t) \approx 1/2 + \sum_{i=1}^5 (c_i/2) erf(lambda_i t)
% The integrals \int t^k erf(t) N(t|mu,s2) dt can be done analytically.
%
% The inputs y, mu and var have to be column vectors of equal lengths.
function [m0,m1,m2] = erfint(y, mu, s2)
l = [0.44 0.41 0.40 0.39 0.36]; % approximation coefficients lambda_i
c = [1.146480988574439e+02; -1.508871030070582e+03; 2.676085036831241e+03;
-1.356294962039222e+03; 7.543285642111850e+01 ];
S2 = 2*s2.*(y.^2)*(l.^2) + 1; % zeroth moment
S = sqrt( S2 );
Z = mu.*y*l./S;
M0 = erf(Z);
m0 = ( 1 + M0*c )/2;
if nargout>1 % first moment
NormZ = exp(-Z.^2)/sqrt(2*pi);
M0mu = M0.*repmat(mu,[1,5]);
M1 = (2*sqrt(2)*y.*s2)*l.*NormZ./S + M0mu;
m1 = ( mu + M1*c )/2;
if nargout>2 % second moment
M2 = repmat(2*mu,[1,5]).*(1+s2.*y.^2*(l.^2)).*(M1-M0mu)./S2 ...
+ repmat(s2+mu.^2,[1,5]).*M0;
m2 = ( mu.^2 + s2 + M2*c )/2;
end
end
surr_code/surr_code/gpml/covFunctions.m0000644000175000017500000001005011415646336017242 0ustar iam23iam23% covariance functions to be use by Gaussian process functions. There are two
% different kinds of covariance functions: simple and composite:
%
% simple covariance functions:
%
% covConst.m - covariance for constant functions
% covLINard.m - linear covariance function with ard
% covLINone.m - linear covariance function
% covMatern3iso.m - Matern covariance function with nu=3/2
% covMatern5iso.m - Matern covariance function with nu=5/2
% covNNone.m - neural network covariance function
% covNoise.m - independent covariance function (ie white noise)
% covPeriodic.m - covariance for smooth periodic function with unit period
% covRQard.m - rational quadratic covariance function with ard
% covRQiso.m - isotropic rational quadratic covariance function
% covSEard.m - squared exponential covariance function with ard
% covSEiso.m - isotropic squared exponential covariance function
%
% composite covariance functions (see explanation at the bottom):
%
% covProd - products of covariance functions
% covSum - sums of covariance functions
%
% Naming convention: all covariance functions start with "cov". A trailing
% "iso" means isotropic, "ard" means Automatic Relevance Determination, and
% "one" means that the distance measure is parameterized by a single parameter.
%
% The covariance functions are written according to a special convention where
% the exact behaviour depends on the number of input and output arguments
% passed to the function. If you want to add new covariance functions, you
% should follow this convention if you want them to work with the functions
% gpr, binaryEPGP and binaryLaplaceGP. There are four different ways of calling
% the covariance functions:
%
% 1) With no input arguments:
%
% p = covNAME
%
% The covariance function returns a string telling how many hyperparameters it
% expects, using the convention that "D" is the dimension of the input space.
% For example, calling "covRQard" returns the string '(D+2)'.
%
% 2) With two input arguments:
%
% K = covNAME(logtheta, x)
%
% The function computes and returns the covariance matrix where logtheta are
% the log og the hyperparameters and x is an n by D matrix of cases, where
% D is the dimension of the input space. The returned covariance matrix is of
% size n by n.
%
% 3) With three input arguments and two output arguments:
%
% [v, B] = covNAME(loghyper, x, z)
%
% The function computes test set covariances; v is a vector of self covariances
% for the test cases in z (of length nn) and B is a (n by nn) matrix of cross
% covariances between training cases x and test cases z.
%
% 4) With three input arguments and a single output:
%
% D = covNAME(logtheta, x, z)
%
% The function computes and returns the n by n matrix of partial derivatives
% of the training set covariance matrix with respect to logtheta(z), ie with
% respect to the log of hyperparameter number z.
%
% The functions may retain a local copy of the covariance matrix for computing
% derivatives, which is cleared as the last derivative is returned.
%
% About the specification of simple and composite covariance functions to be
% used by the Gaussian process functions gpr, binaryEPGP and binaryLaplaceGP:
% Covariance functions can be specified in two ways: either as a string
% containing the name of the covariance function or using a cell array. For
% example:
%
% covfunc = 'covRQard';
% covfunc = {'covRQard'};
%
% are both supported. Only the second form using the cell array can be used
% for specifying composite covariance functions, made up of several
% contributions. For example:
%
% covfunc = {'covSum',{'covRQiso','covSEard','covNoise'}};
%
% specifies a covariance function which is the sum of three contributions. To
% find out how many hyperparameters this covariance function requires, we do:
%
% feval(covfunc{:})
%
% which returns the string '3+(D+1)+1' (ie the 'covRQiso' contribution uses
% 3 parameters, the 'covSEard' uses D+1 and 'covNoise' a single parameter).
%
% (C) copyright 2006, Carl Edward Rasmussen, 2006-04-07.
surr_code/surr_code/gpml/sq_dist.m0000644000175000017500000000421211415646336016233 0ustar iam23iam23% sq_dist - a function to compute a matrix of all pairwise squared distances
% between two sets of vectors, stored in the columns of the two matrices, a
% (of size D by n) and b (of size D by m). If only a single argument is given
% or the second matrix is empty, the missing matrix is taken to be identical
% to the first.
%
% Special functionality: If an optional third matrix argument Q is given, it
% must be of size n by m, and in this case a vector of the traces of the
% product of Q' and the coordinatewise squared distances is returned.
%
% NOTE: The program code is written in the C language for efficiency and is
% contained in the file sq_dist.c, and should be compiled using matlabs mex
% facility. However, this file also contains a (less efficient) matlab
% implementation, supplied only as a help to people unfamiliar with mex. If
% the C code has been properly compiled and is avaiable, it automatically
% takes precendence over the matlab code in this file.
%
% Usage: C = sq_dist(a, b)
% or: C = sq_dist(a) or equiv.: C = sq_dist(a, [])
% or: c = sq_dist(a, b, Q)
% where the b matrix may be empty.
%
% where a is of size D by n, b is of size D by m (or empty), C and Q are of
% size n by m and c is of size D by 1.
%
% Copyright (c) 2003, 2004, 2005 and 2006 Carl Edward Rasmussen. 2006-03-09.
function C = sq_dist(a, b, Q);
if nargin < 1 | nargin > 3 | nargout > 1
error('Wrong number of arguments.');
end
if nargin == 1 | isempty(b) % input arguments are taken to be
b = a; % identical if b is missing or empty
end
[D, n] = size(a);
[d, m] = size(b);
if d ~= D
error('Error: column lengths must agree.');
end
if nargin < 3
C = zeros(n,m);
for d = 1:D
C = C + (repmat(b(d,:), n, 1) - repmat(a(d,:)', 1, m)).^2;
end
% C = repmat(sum(a.*a)',1,m)+repmat(sum(b.*b),n,1)-2*a'*b could be used to
% replace the 3 lines above; it would be faster, but numerically less stable.
else
if [n m] == size(Q)
C = zeros(D,1);
for d = 1:D
C(d) = sum(sum((repmat(b(d,:), n, 1) - repmat(a(d,:)', 1, m)).^2.*Q));
end
else
error('Third argument has wrong size.');
end
end
surr_code/surr_code/gpml/covSEard.m0000644000175000017500000000326111415646336016276 0ustar iam23iam23function [A, B] = covSEard(loghyper, x, z)
% Squared Exponential covariance function with Automatic Relevance Detemination
% (ARD) distance measure. The covariance function is parameterized as:
%
% k(x^p,x^q) = sf2 * exp(-(x^p - x^q)'*inv(P)*(x^p - x^q)/2)
%
% where the P matrix is diagonal with ARD parameters ell_1^2,...,ell_D^2, where
% D is the dimension of the input space and sf2 is the signal variance. The
% hyperparameters are:
%
% loghyper = [ log(ell_1)
% log(ell_2)
% .
% log(ell_D)
% log(sqrt(sf2)) ]
%
% For more help on design of covariance functions, try "help covFunctions".
%
% (C) Copyright 2006 by Carl Edward Rasmussen (2006-03-24)
if nargin == 0, A = '(D+1)'; return; end % report number of parameters
persistent K;
[n D] = size(x);
ell = exp(loghyper(1:D)); % characteristic length scale
sf2 = exp(2*loghyper(D+1)); % signal variance
if nargin == 2
K = sf2*exp(-sq_dist(diag(1./ell)*x')/2);
A = K;
elseif nargout == 2 % compute test set covariances
A = sf2*ones(size(z,1),1);
B = sf2*exp(-sq_dist(diag(1./ell)*x',diag(1./ell)*z')/2);
else % compute derivative matrix
% check for correct dimension of the previously calculated kernel matrix
if any(size(K)~=n)
K = sf2*exp(-sq_dist(diag(1./ell)*x')/2);
end
if z <= D % length scale parameters
A = K.*sq_dist(x(:,z)'/ell(z));
else % magnitude parameter
A = 2*K;
clear K;
end
end
surr_code/surr_code/gpml/gauher.m0000644000175000017500000000427311415646336016047 0ustar iam23iam23% compute abscissas and weight factors for Gaussian-Hermite quadrature
%
% CALL: [x,w]=gauher(N)
%
% x = base points (abscissas)
% w = weight factors
% N = number of base points (abscissas) (integrates a (2N-1)th order
% polynomial exactly)
%
% p(x)=exp(-x^2/2)/sqrt(2*pi), a =-Inf, b = Inf
%
% The Gaussian Quadrature integrates a (2n-1)th order
% polynomial exactly and the integral is of the form
% b N
% Int ( p(x)* F(x) ) dx = Sum ( w_j* F( x_j ) )
% a j=1
%
% this procedure uses the coefficients a(j), b(j) of the
% recurrence relation
%
% b p (x) = (x - a ) p (x) - b p (x)
% j j j j-1 j-1 j-2
%
% for the various classical (normalized) orthogonal polynomials,
% and the zero-th moment
%
% 1 = integral w(x) dx
%
% of the given polynomial's weight function w(x). Since the
% polynomials are orthonormalized, the tridiagonal matrix is
% guaranteed to be symmetric.
function [x,w]=gauher(N)
if N==20 % return precalculated values
x=[ -7.619048541679757;-6.510590157013656;-5.578738805893203;
-4.734581334046057;-3.943967350657318;-3.18901481655339 ;
-2.458663611172367;-1.745247320814127;-1.042945348802751;
-0.346964157081356; 0.346964157081356; 1.042945348802751;
1.745247320814127; 2.458663611172367; 3.18901481655339 ;
3.943967350657316; 4.734581334046057; 5.578738805893202;
6.510590157013653; 7.619048541679757];
w=[ 0.000000000000126; 0.000000000248206; 0.000000061274903;
0.00000440212109 ; 0.000128826279962; 0.00183010313108 ;
0.013997837447101; 0.061506372063977; 0.161739333984 ;
0.260793063449555; 0.260793063449555; 0.161739333984 ;
0.061506372063977; 0.013997837447101; 0.00183010313108 ;
0.000128826279962; 0.00000440212109 ; 0.000000061274903;
0.000000000248206; 0.000000000000126 ];
else
b = sqrt( (1:N-1)/2 )';
[V,D] = eig( diag(b,1) + diag(b,-1) );
w = V(1,:)'.^2;
x = sqrt(2)*diag(D);
endsurr_code/surr_code/gpml/covRQard.m0000644000175000017500000000342311415646336016311 0ustar iam23iam23function [A, B] = covRQard(logtheta, x, z)
% Rational Quadratic covariance function with Automatic Relevance Determination
% (ARD) distance measure. The covariance function is parameterized as:
%
% k(x^p,x^q) = sf2 * [1 + (x^p - x^q)'*inv(P)*(x^p - x^q)/(2*alpha)]^(-alpha)
%
% where the P matrix is diagonal with ARD parameters ell_1^2,...,ell_D^2, where
% D is the dimension of the input space, sf2 is the signal variance and alpha
% is the shape parameter for the RQ covariance. The hyperparameters are:
%
% loghyper = [ log(ell_1)
% log(ell_2)
% .
% log(ell_D)
% log(sqrt(sf2))
% log(alpha) ]
%
% For more help on design of covariance functions, try "help covFunctions".
%
% (C) Copyright 2006 by Carl Edward Rasmussen (2006-09-08)
if nargin == 0, A = '(D+2)'; return; end
persistent K;
[n D] = size(x);
ell = exp(loghyper(1:D));
sf2 = exp(2*loghyper(D+1));
alpha = exp(loghyper(D+2));
if nargin == 2
K = (1+0.5*sq_dist(diag(1./ell)*x')/alpha);
A = sf2*(K.^(-alpha));
elseif nargout == 2 % compute test set covariances
A = sf2*ones(size(z,1),1);
B = sf2*((1+0.5*sq_dist(diag(1./ell)*x',diag(1./ell)*z')/alpha).^(-alpha));
else % compute derivative matrix
% check for correct dimension of the previously calculated kernel matrix
if any(size(K)~=n)
K = (1+0.5*sq_dist(diag(1./ell)*x')/alpha);
end
if z <= D % length scale parameters
A = sf2*K.^(-alpha-1).*sq_dist(x(:,z)'/ell(z));
elseif z == D+1 % magnitude parameter
A = 2*sf2*(K.^(-alpha));
else
A = sf2*K.^(-alpha).*(0.5*sq_dist(diag(1./ell)*x')./K - alpha*log(K));
clear K;
end
end
surr_code/surr_code/gpml/sq_dist.c0000644000175000017500000000361311415646336016225 0ustar iam23iam23/* sq_dist - a mex function to compute a matrix of all pairwise squared
distances between two sets of vectors, stored in the columns of the two
matrices that are arguments to the function. The length of the vectors must
agree. If only a single argument is given, the missing argument is taken to
be identical to the first. If an optional third matrix argument Q is given,
it must be of the same size as the output, but in this case a vector of the
traces of the product of Q and the coordinatewise squared distances is
returned.
Copyright (c) 2003, 2004 Carl Edward Rasmussen. 2003-04-22. */
#include "mex.h"
#include
void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
double *a, *b, *C, *Q, z, t;
int D, n, m, i, j, k;
if (nrhs < 1 || nrhs > 3 || nlhs > 1)
mexErrMsgTxt("Usage: C = sq_dist(a,b)\n or: C = sq_dist(a)\n or: c = sq_dist(a,b,Q)\nwhere the b matrix may be empty.");
a = mxGetPr(prhs[0]);
m = mxGetN(prhs[0]);
D = mxGetM(prhs[0]);
if (nrhs == 1 || mxIsEmpty(prhs[1])) {
b = a;
n = m;
} else {
b = mxGetPr(prhs[1]);
n = mxGetN(prhs[1]);
if (D != mxGetM(prhs[1]))
mexErrMsgTxt("Error: column lengths must agree");
}
if (nrhs < 3) {
plhs[0] = mxCreateDoubleMatrix(m, n, mxREAL);
C = mxGetPr(plhs[0]);
for (i=0; i 0
ff_samples(ii,:) = ff';
theta_samples(ii,:) = theta';
cond_llh_samples(ii) = cur_llh;
comp_llh_samples(ii) = cur_llh - 0.5*ff'*solve_chol(chol_cov, ff) ...
- sum(log(diag(chol_cov))) - 0.5*N*log(2*pi);
end
end
elapsed = toc;
results.ff_samples = ff_samples;
results.theta_samples = theta_samples;
results.cond_llh_samples = cond_llh_samples;
results.comp_llh_samples = comp_llh_samples;
results.num_llh_calls = num_llh_calls;
results.num_cov_calls = num_cov_calls;
results.elapsed = elapsed;
results.eff_cond_llh_samples = effective_size_rcoda(cond_llh_samples(:));
results.eff_comp_llh_samples = effective_size_rcoda(comp_llh_samples(:));
fprintf('%03d/%3d] CondLLH Eff Samp: %0.2f CompLLH Eff Samp: %0.2f %0.2f secs\n\n', ...
run, runs, results.eff_cond_llh_samples, results.eff_comp_llh_samples, elapsed);
surr_code/surr_code/experiment_toolbox/0000755000175000017500000000000011415646325017375 5ustar iam23iam23surr_code/surr_code/experiment_toolbox/experiment_run.m0000644000175000017500000002333711415646325022627 0ustar iam23iam23function success = experiment_run(name, num_runs, fn, pass_run_number);
%EXPERIMENT_RUN save results from multiple function runs. Can be run concurrently.
%
% success = experiment_run(name, num_runs, fn, [pass_run_number=false]);
%
% You provide a function that returns a structure and that you want to be run
% num_runs times. In the end you will get the results in a structure array
% stored in a .mat file. Running this m-file once will achieve that, the runs
% will be run one after the other. You can also run this m-file many times
% concurrently with the same arguments and will get the same results, but
% faster.
%
% The individual runs are initially stored in separate .mat files containing a
% single structure. When all runs have been completed, the results are gathered
% together in a single .mat file holding a structure array. Use experiment_load
% to access the data so that you don't need to know the details of how this
% works.
%
% If later you want more runs, just run this again with num_runs set to a bigger
% number. After all the extra runs have been done, they will be appended to
% the previously gathered results.
%
% WARNING: Random seeds are set using the run number using the 'classic'
% (deprecated) method for doing this in Matlab.
%
% Care is taken to ensure that concurrent Matlab processes don't end up running
% the same job, or getting confused into missing out a job. This relies on known
% atomic file operations on POSIX systems (even over NFS). NOTE these safeguards
% will not work on Windows: YOU CANNOT SAFELY RUN CONCURRENT INSTANCES OF THIS
% FUNCTION ON WINDOWS.
%
% DISCLAIMER: I HAVEN'T ACTUALLY TESTED THIS ON WINDOWS AT ALL, it may well fall
% flat on its face.
%
% Inputs:
% name string arbitrary tag for this experiment
% num_runs 1x1 total number of runs that should be done. Some may be
% done by other concurrently running instances.
% fn @fn A function that returns a struct to be saved.
% A runtime field is added to this struct if it doesn't
% already exist.
% pass_run_number bool If false (default), fn() takes no arguments.
% If true, fn() takes the run number in 1:num_runs
% as an argument.
%
% Outputs:
% success bool did the final gathering work? Could be a failure
% because another instance locked the gathering
% operation first.
%
% See also: EXPERIMENT_LOAD
% Iain Murray, January 2009, October 2009
if ~exist('pass_run_number', 'var')
pass_run_number = false;
end
opts = {};
experiment = experiment_setup(name, num_runs);
while experiment.needs_runs
if pass_run_number
opts = {experiment.run};
end
% Ensures experiments are different and reproducible
% Can always override this in fn(), but this seems like a sensible default.
rand('twister', experiment.run); % best available in 7.4 (not fastest though)
randn('state', experiment.run); % didn't have twister option in 7.4
try
tic
% This is where the experiment is actually run:
result = fn(opts{:});
runtime = toc;
if ~isfield(result, 'runtime')
result.runtime = runtime;
end
experiment = experiment_record(experiment, result);
catch
experiment = experiment_cleanup(experiment);
rethrow(lasterror);
end
end
success = experiment_gather(experiment);
function experiment = experiment_setup(experiment_name, num_runs)
experiment.name = experiment_name;
experiment.num_runs = num_runs;
base = experiment_base();
if ~exist(base, 'dir')
success = mkdir(base);
% test existence rather than success, in case another process created the
% directory just before us:
assert(exist(base, 'dir') ~= 0);
end
% It may be that we gathered a "complete" experiment before with fewer num_runs,
% and now we are asking for more runs. So find out how many gathered runs (if
% any) have already been done and don't redo those.
final_mat = experiment_mat(experiment_name);
if exist(final_mat, 'file')
ws = load(final_mat, 'num_runs');
next_run = ws.num_runs + 1;
else
next_run = 1;
end
% Find and lock the next experiment that needs doing and isn't locked
for ii = next_run:num_runs
% We may find that next_run has changed, and I don't want to fiddle with ii within a loop
if ii < next_run
continue
end
if run_lock(experiment_name, ii)
% Check the run still needs doing (it could be we got the lock just
% because this experiment has just been gathered since we last looked at
% final_mat)
if exist(final_mat, 'file')
ws = load(final_mat, 'num_runs');
next_run = ws.num_runs + 1;
if ii < next_run
continue;
end
end
% Set up the needed run and get out of here.
experiment.needs_runs = true;
experiment.run = ii;
try
% Not supported in Octave at time of writing
experiment.cleaner_handle = myCleanup(@() run_unlock(experiment_name, ii));
end
return
end
end
experiment.needs_runs = false;
function name = experiment_lock(varargin)
name = [experiment_mat(varargin{:}), '.lock'];
function success = run_lock(name, run)
mat_file = experiment_mat(name, run);
lock_file = experiment_lock(name, run);
if exist(mat_file) || exist(lock_file)
success = false;
return;
end
fail = my_lock(lock_file);
success = ~fail;
function success = run_unlock(name, run)
lock_file = experiment_lock(name, run);
fail = mydelete(lock_file);
success = ~fail;
function experiment = experiment_cleanup(experiment)
if isfield(experiment, 'cleaner_handle')
experiment.cleaner_handle = 0;
else
run_unlock(experiment.name, experiment.run);
end
function experiment = experiment_record(experiment, result_struct)
mat_file = experiment_mat(experiment.name, experiment.run);
if ~exist('octave_config_info', 'builtin')
save(mat_file, '-struct', 'result_struct');
else
% TODO identify an Octave version where -struct is supported and check for
% >= that, rather than assuming all Octave versions don't have it.
save_struct(mat_file, result_struct);
end
experiment = experiment_cleanup(experiment);
experiment = experiment_setup(experiment.name, experiment.num_runs);
function success = experiment_gather(experiment)
success = 0;
num_runs = experiment.num_runs;
for ii = 1:num_runs
if exist(experiment_lock(experiment.name, ii), 'file')
warning('Failed to gather results as exeriment(s) still running');
return
end
end
final_mat = experiment_mat(experiment.name);
final_lock = experiment_lock(experiment.name);
fail = my_lock(final_lock);
if fail
warning('Failed to gather results as another instance seems to be doing it.');
return;
end
if exist(final_mat, 'file');
ws = load(final_mat);
results = ws.results;
first_gather = length(ws.results) + 1;
else
first_gather = 1;
end
if first_gather <= num_runs
for ii = first_gather:num_runs
mat = experiment_mat(experiment.name, ii);
if ~exist(mat, 'file')
warning(['Failed to gather due to missing result file: ', mat]);
mydelete(final_lock);
return
end
ws = load(mat);
results(ii) = ws;
end
tmp_name = [final_mat, '.tmp'];
save('-v7', tmp_name, 'results', 'num_runs');
if ispc
movefile(tmp_name, final_mat);
else
% I haven't tested movefile's properties on Unix, so I'm sticking with this:
[fail, dummy] = my_system(['mv "', tmp_name, '" "', final_mat, '"']);
end
% Feeling daring: delete files that have been gathered.
% Hopefully an error would have occurred if they weren't saved properly.
for ii = first_gather:num_runs
mat = experiment_mat(experiment.name, ii);
mydelete(mat);
end
end
% Remove lock on final_mat
mydelete(final_lock);
success = 1;
function fail = mydelete(filename)
if ispc
delete(filename);
fail = false; % again, the Windows implementation is flaky and untested.
else
% Moving is an atomic operation on POSIX systems,
if exist('octave_config_info', 'builtin')
% See my_system() for why there's the 2>/dev/null here and not for Matlab
[fail, dummy] = system(['mv "', filename, '" "', filename, '.delme" 2>/dev/null && rm "', filename, '.delme" 2>/dev/null']);
else
[fail, dummy] = system(['mv "', filename, '" "', filename, '.delme" && rm "', filename, '.delme"']);
end
end
function varargout = my_system(str)
% Octave splurges standard error from system() commands onto the screen rather
% than returning it as part of the output. As Octave calls '/bin/sh' to run
% system() commands we can throw away stderr by adding '2>/dev/null'. (Note I
% would like to keep the output with '2>&1', but that relies on /bin/sh being
% bash and breaks on BSD and Debian/Ubuntu systems.) We can't add the
% redirection command in Matlab, because it uses tcsh to execute commands, which
% would choke on the redirection syntax.
if exist('octave_config_info', 'builtin')
cmd_end = ' 2>/dev/null';
else
cmd_end = '';
end
varargout = cell(1, nargout);
[varargout{:}] = system([str, cmd_end]);
function fail = my_lock(lock_file)
if ispc
% This isn't a working locking mechanism, I would need to identify an atomic
% operation on Windows and have a Windows machine with Matlab and Octave to test it.
% (I haven't any existing locking code on the file exchange.)
fail = exist(lock_file, 'file');
if ~fail
fid = fopen(lock_file, 'w');
fclose(fid);
end
else
[fail, dummy] = my_system(['ln -s /dev/null "', lock_file, '"']);
end
surr_code/surr_code/experiment_toolbox/private/0000755000175000017500000000000011415646325021047 5ustar iam23iam23surr_code/surr_code/experiment_toolbox/private/experiment_mat.m0000644000175000017500000000033511415646325024247 0ustar iam23iam23function mat_file = experiment_mat(name, run_number)
if nargin == 1
mat_file = sprintf('%s%s.mat', experiment_base(), name);
else
mat_file = sprintf('%s%s_run%03d.mat', experiment_base(), name, run_number);
end
surr_code/surr_code/experiment_toolbox/private/experiment_base.m0000644000175000017500000000011711415646325024376 0ustar iam23iam23function base = experiment_base()
base = [fullfile(pwd, 'results'), filesep];
surr_code/surr_code/experiment_toolbox/testing/0000755000175000017500000000000011415646325021052 5ustar iam23iam23surr_code/surr_code/experiment_toolbox/testing/test_locking_experiment.m0000644000175000017500000000113211415646325026152 0ustar iam23iam23function test_locking_experiment()
% The code should lock the creation of experiments so that duplicates aren't
% run, even if this m-file is being run many times across machines using NFS to
% save results.
% This allows me to start the runs at a similar time by touching a marker file "GO"
while ~exist('GO','file')
pause(0.01);
end
experiment_name = 'lockingexperiment';
for num_runs = [6 12];
success = experiment_run(experiment_name, num_runs, @test_fn, true);
pause(0.01);
end
function results = test_fn(ii)
aa = ii;
bb = aa*2;
pause(.5);
results = struct('aa', aa, 'bb', bb);
surr_code/surr_code/experiment_toolbox/testing/test_experiment_run.m0000644000175000017500000000104711415646325025335 0ustar iam23iam23function test_experiment_run()
experiment_name = 'toyexperiment';
for num_runs = [6 12];
success = experiment_run(experiment_name, num_runs, @test_fn, true);
end
results = experiment_load(experiment_name);
testequal([results.aa], 1:12);
testequal([results.bb], 2:2:24);
num_runs = experiment_load(experiment_name, 0);
testequal(num_runs, 12);
second = experiment_load(experiment_name, 2);
testequal(second.aa, 2);
testequal(second.bb, 4);
function results = test_fn(ii)
aa = ii;
bb = aa*2;
pause(.2);
results = struct('aa', aa, 'bb', bb);
surr_code/surr_code/experiment_toolbox/experiment_load.m0000644000175000017500000000220111415646325022725 0ustar iam23iam23function results = experiment_load(name, run);
%EXPERIMENT_LOAD load results stored by EXPERIMENT_RUN.
%
% results = experiment_load(name[, run]);
%
% Loads results created with experiment_run. An error of some sort will result
% from trying to read runs that haven't completed yet, or from requesting the
% number of runs stored if no results have been gathered together yet.
%
% Inputs:
% name string Same name used as in experiment run
% run 1x1 If missing, return a structure array with all results
% If >=1 return structure with just results from that run
% If 0 return the number of runs stored
%
% Outputs:
% results structure (array)
%
% See also: EXPERIMENT_RUN
% Iain Murray, January 2009
final_mat = experiment_mat(name);
if ~exist('run', 'var')
ws = load(final_mat);
results = ws.results;
elseif run == 0
ws = load(final_mat, 'num_runs');
results = ws.num_runs;
else
mat = experiment_mat(name, run);
try
ws = load(mat);
results = ws.results;
catch
ws = load(final_mat);
results = ws.results(run);
end
end
surr_code/surr_code/experiment_toolbox/@myCleanup/0000755000175000017500000000000011415646325021432 5ustar iam23iam23surr_code/surr_code/experiment_toolbox/@myCleanup/myCleanup.m0000644000175000017500000000174011415646325023547 0ustar iam23iam23% Prior to Matlab 6.5 it was possible to catch ctrl-c interrupts using
% try..catch blocks. Mathworks deliberately broke this and, according to their
% online help, didn't provide a workaround. Last time I looked Octave did trap
% ctrl-c. If not, look into "unwind_protect", as I don't think it will
% understand this class.
%
% Finally, Matlab 7.5 provides a documented solution called "onCleanup".
% I learned about it from here: http://blogs.mathworks.com/loren/2008/03/10/keeping-things-tidy/
% This file is a version of that. I worked out how onCleanup works from online
% sources, but haven't seen the actual implementation. I'll probably get access
% to Matlab 7.5 soon, but made this version because I didn't want to introduce a
% dependency in my code.
classdef myCleanup < handle
properties
fn = 0;
end
methods
function obj = myCleanup(fn)
obj.fn = fn;
end
function delete(obj)
obj.fn();
end
end
end
surr_code/surr_code/experiment_toolbox/save_struct.m0000644000175000017500000000302011415646325022110 0ustar iam23iam23function save_struct(filename, strct)
%SAVE_STRUCT hack for old Octave not supporting save(filename, '-struct', strct)
%
% save_struct(filename, strct)
%
% Saves a Matlab V7 .mat file containing the elements of strct as top-level
% variables.
%
% Inputs:
% filename string
% strct structure
% Iain Murray, January 2009
% The version of Octave I have installed doesn't support '-struct' in save
% commands. I found a patch online, but it seems likely that it will take a
% while for this to filter through to people's desktops. This is a quick work-around.
% SAVE is quite limited. The names saved into the .mat file are the same as in
% the local scope. a) I have to put the strct variables into the local scope. b)
% I have to hope they don't clash with other variables I need. This is why I use
% such ugly variable names here. I can think of convoluted ways to allow fields
% in strct to begin with 'yHjqioPz_', but I don't think it will ever be a
% problem in my code, and the real solution is to upgrade Octave.
if ~sum(filename == '.')
filename = [filename, '.mat'];
end
yHjqioPz_filename = filename;
yHjqioPz_args = fieldnames(strct);
yHjqioPz_strct = strct;
% Otherwise bad things happen:
assert(~ismember('yHjqioPz_args', yHjqioPz_args));
assert(~ismember('yHjqioPz_filename', yHjqioPz_args));
assert(~ismember('yHjqioPz_field', yHjqioPz_args));
for yHjqioPz_field = yHjqioPz_args(:)'
eval([yHjqioPz_field{1}, ' = yHjqioPz_strct.', yHjqioPz_field{1}, ';']);
end
save('-v7', yHjqioPz_filename, yHjqioPz_args{:});
surr_code/surr_code/setup_gaussian.m0000644000175000017500000000152611415646336016665 0ustar iam23iam23function setup = setup_gaussian()
[setup.X setup.Y setup.noise_var] = get_synthetic_data();
setup.runs = 10;
setup.iterations = 5000;
setup.burn = 1000;
setup.ess_iterations = 10;
setup.max_ls = 10.0;
setup.min_ls = 0.01;
setup.print_mod = 1;
jitter = 1e-6;
gpml_covs = {'covSum', {'covSEard', 'covNoise'}};
setup.slice_width = 10;
setup.llh_fn = @gaussian_llh;
setup.cov_fn = @(theta) feval(gpml_covs{:}, [theta ; 0 ; log(jitter)], setup.X);
setup.theta_log_prior = @(theta) log(1.0*all((theta>log(setup.min_ls)) & (thetalog(setup.min_ls)) & (theta setup.max_aux_std) = setup.max_aux_std;
gg = (gg-gp_mean)/gain;
end
function [std gg] = aux_taylor(theta, K, gain, gp_mean)
[std gg] = poiss_aux_fixed(setup.Y);
std = std/gain;
std(std > setup.max_aux_std) = setup.max_aux_std;
gg = (gg-gp_mean)/gain;
end
function [gain llh] = update_gain(gain, ff, cur_mean, cur_llh)
% Slice sample
particle = struct('pos', gain, 'ff', ff, 'mean', cur_mean);
particle = gain_slice_fn(particle, -Inf);
particle = slice_sweep(particle, @gain_slice_fn, 1, 0);
gain = particle.pos;
llh = particle.Lpstar;
end
function [new_mean llh] = update_mean(cur_mean, ff, gain, cur_llh)
% Slice sample
particle = struct('pos', cur_mean, 'ff', ff, 'gain', gain);
particle = mean_slice_fn(particle, -Inf);
particle = slice_sweep(particle, @mean_slice_fn, 1, 0);
new_mean = particle.pos;
llh = particle.Lpstar;
end
function pp = gain_slice_fn(pp, Lpstar_min)
gain = pp.pos;
if (gain < setup.min_gain) || (gain > setup.max_gain)
pp.Lpstar = -Inf;
pp.on_slice = false;
return;
end
pp.Lpstar = redwood_llh(pp.ff, gain, pp.mean);
pp.on_slice = (pp.Lpstar >= Lpstar_min);
end
function pp = mean_slice_fn(pp, Lpstar_min)
new_mean = pp.pos;
if (new_mean < setup.min_gpmean) || (new_mean > setup.max_gpmean)
pp.Lpstar = -Inf;
pp.on_slice = false;
return;
end
pp.Lpstar = redwood_llh(pp.ff, pp.gain, new_mean);
pp.on_slice = (pp.Lpstar >= Lpstar_min);
end
endsurr_code/surr_code/gen_synthetic.m0000644000175000017500000000131111415646336016466 0ustar iam23iam23% Generate a synthetic data set on the 10d unit hypercube.
clear;
% Parameters.
seed = 0;
num_data = 200;
noise_variance = 0.09;
gain = 1.0;
jitter = 1e-9;
num_dims = 10;
gpml_covs = {'covSum', {'covSEard', 'covNoise'}};
% Fix random seed.
rand('state', seed);
randn('state', seed);
length_scales = sqrt(num_dims)*rand([num_dims 1]);
theta_log = log([ length_scales ; gain ; jitter ]);
% Generate the input points.
data.X = rand([num_data num_dims]);
% Generate the true function.
K = feval(gpml_covs{:}, theta_log, data.X);
U = chol(K);
data.Y = U'*randn([num_data 1]) + sqrt(noise_variance)*randn([num_data 1]);
save('data/synthetic.mat');
surr_code/surr_code/setup_ionosphere.m0000644000175000017500000000344411415646336017227 0ustar iam23iam23function setup = setup_ionosphere()
[setup.train_x setup.train_y setup.test_x setup.test_y] = get_ionosphere_data();
setup.runs = 10;
setup.iterations = 5000;
setup.burn = 1000;
setup.ess_iterations = 10;
setup.max_ls = 10.0;
setup.min_ls = 0.01;
setup.max_gain = 10.0;
setup.min_gain = 0.01;
jitter = 1e-6;
gpml_covs = {'covSum', {'covSEard', 'covNoise'}};
setup.slice_width = 10;
setup.llh_fn = @ionosphere_llh;
setup.cov_fn = @(theta) feval(gpml_covs{:}, [theta ; 0 ; log(jitter)], setup.train_x);
setup.theta_log_prior = @(theta) log(1.0*all((theta>log(setup.min_ls)) & (theta setup.max_gain)
pp.Lpstar = -Inf;
pp.on_slice = false;
return;
end
pp.Lpstar = ionosphere_llh(pp.ff, gain);
pp.on_slice = (pp.Lpstar >= Lpstar_min);
end
function err = train_error(ff)
err = 1.0 - mean((ff > 0) == setup.train_y);
end
end
surr_code/surr_code/get_mine_data.m0000644000175000017500000000240111415646336016404 0ustar iam23iam23function [xx, yy] = get_mine_data(bin_width)
%GET_MINE_DATA
%
% [xx, yy] = get_mine_data(bin_width)
%
% Inputs:
% bin_width 1x1 Optional, Default=50.
% Number of days in each bin (except possibly the last)
%
% The default bin width of 50 gives 811 bins.
%
% Outputs:
% xx 1xN Centres of bins. Time measured in days.
% (Could argue about best definition here. I've picked it so
% that if a bin only contains first day the bin is at '1', if
% it contains the first two days it is at '1.5' and so on.)
% yy 1xN Number of events in each bin
% Iain Murray, October 2009
if ~exist('bin_width', 'var')
bin_width = 50;
end
% Facts from paper (could be derived from data, but I'm using to sanity check):
num_days = 40550;
num_events = 191;
intervals = load('data/mining.dat');
event_days = [1, cumsum(intervals(:)')+1];
assert(event_days(end) == num_days);
edges = [1:bin_width:num_days, num_days+1];
bin_counts = histc(event_days, edges);
assert(sum(bin_counts) == num_events);
% Should have no data at exactly num_days+1, also strip off this cruft:
assert(bin_counts(end) == 0);
bin_counts = bin_counts(1:end-1);
xx = (edges(1:end-1) + (edges(2:end)-1)) / 2;
yy = bin_counts;
surr_code/surr_code/plots/0000755000175000017500000000000011415646336014612 5ustar iam23iam23surr_code/surr_code/logistic_aux.m0000644000175000017500000000302211415646336016316 0ustar iam23iam23function [aux_std, gg] = logistic_aux(labels, log_prior_var, prior_mean)
%LOGISTIC_AUX return effective Gaussian likelihood noise level (and centres)
%
% [aux_std, gg] = logistic_aux(labels, log_prior_var, prior_mean)
%
% Inputs:
% labels Nx1 values are in {-1,+1}
% log_prior_var 1x1 or Nx1
% prior_mean 1x1 or Nx1
%
% Outputs:
% aux_std Nx1
% gg Nx1 (if needed)
% Iain Murray, April 2010
if (nargin > 2) && (prior_mean ~= 0)
error('Non-zero prior means are not implemented yet.');
else
prior_mean = 0;
end
prior_var = exp(log_prior_var);
prior_precision = 1./prior_var;
% The approx that's special to the logistic likelihood:
post_var = prior_var.*(1 - 1./(pi/2 + 4./prior_var));
post_precision = 1./post_var;
mask = (post_precision > prior_precision);
aux_std = zeros(size(mask));
aux_std(mask) = sqrt(1 ./ (post_precision(mask) - msk(prior_precision, mask)));
aux_std(~mask) = Inf;
if numel(aux_std) == 1
aux_std = repmat(aux_std, size(labels));
end
if nargout > 1
mu = sqrt(prior_var./(pi/2 + 4./prior_var));
gg = (aux_std.^2).*(mu.*post_precision - prior_mean.*prior_precision);
% Get rid of infinities, which disappear in sensible limits anyway:
BIG = 1e100;
gg = min(gg, BIG);
gg = labels.*gg;
end
function xx = msk(A, mask)
%MSK msk(A, mask) returns A(mask), or just A if A is a scalar.
%
%This is useful for when A is a scalar standing in for an array with all
%elements equal.
if numel(A) == 1
xx = A;
else
xx = A(mask);
end
__