StatSat Bibliography

[1] John McCarthy and Patrick J. Hayes. Some philosophical problems from the standpoint of artificial intelligence. In B. Meltzer and D. Michie, editors, Machine Intelligence 4, pages 463-502. Edinburgh University Press, 1969. Reprinted in [4].
[ bib ]
[2] R. E. Fikes and N. J. Nilsson. STRIPS: A new approach to the application of theorem proving to problem solving. Artificial Intelligence, 2(3-4):189-208, 1971.
[ bib ]
[3] Julian Besag. Spatial interaction and the statistical analysis of lattice systems. Journal of the Royal Statistical Society, Series B (Methodological), 36(2):192-236, 1974.
[ bib | http ]
[4] John McCarthy. Formalizing Common Sense: Papers by John McCarthy, edited by V. Lifschitz. Ablex, 1990.
[ bib ]
[5] Sun Kim and Hantao Zhang. ModGen: Theorem proving by model generation. In Proceedings of the National Conference on Artificial Intelligence (AAAI-94), pages 162-167, 1994.
[ bib | .html ]
[6] Leora Morgenstern. The problem with solutions to the frame problem. In Kenneth M. Ford and Zenon Pylyshyn, editors, The Robot's Dilemma Revisited: The Frame Problem in Artificial Intelligence, pages 99-133. Ablex Publishing Co., Norwood, New Jersey, 1996.
[ bib | .html ]
[7] Joakim Gustafsson and Patrick Doherty. Embracing occlusion in specifying the indirect effects of actions. In Principles of Knowledge Representation and Reasoning, pages 87-98, 1996.
[ bib | .html ]
[8] R. M. Shiffrin and M. Steyvers. A model for recognition memory: REM: Retrieving effectively from memory. Psychonomic Bulletin and Review, 4(2):145-166, 1997.
[ bib ]
[10] Andrew W. Moore and Mary S. Lee. Cached sufficient statistics for efficient machine learning with large datasets. Journal of Artificial Intelligence Research, 8:67-91, 1998.
[ bib | .html ]
[11] Michael I. Jordan, Zoubin Ghahramani, Tommi S. Jaakkola, and Lawrence K. Saul. An introduction to variational methods for graphical models. In M. I. Jordan, editor, Learning in Graphical Models, pages 105-161. MIT Press, 1998.
[ bib ]
[12] Hiroyuki Matsuda. Physical nature of higher-order mutual information: Intrinsic correlations and frustration. Physical Review E, 62:3096-3102, 2000.
[ bib ]
[13] D. E. Diller, P. A. Nobel, and R. M. Shiffrin. An ARC-REM model for accuracy and response time in recognition and cued recall. Journal of Experimental Psychology: Learning, Memory, and Cognition, 27(2):414-435, 2001. Research Report # 230, IU Cognitive Science Series.
[ bib | .pdf ]
[14] Tatiana Tambouratzis. An artificial neural network satisfiability tester. Int. J. Intell. Syst., 16(12):1357-1375, 2001.
[ bib ]
An artificial neural network tester for the satisfiability problem of propositional calculus is presented. Satisfiability is treated as a constraint satisfaction optimization problem and, contrary to most existing satisfiability testers, the expressions are converted into disjunctive normal form before testing. The artificial neural network is based on the principles of harmony theory. Its basic characteristics are the simulated annealing procedure and the harmony function; the latter constitutes a measure of the satisfiability of the expression under the current truth assignment to its variables. The tester is such that: (a) the satisfiability of any expression is determined; (b) a truth assignment to the variables of the expression is output that renders true the greatest possible number of clauses; (c) all the truth assignments that render true the maximum number of clauses can be produced.
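The simulated-annealing idea described in this abstract can be illustrated with a minimal sketch. This is a generic annealing MAX-SAT search in Python, not the paper's harmony-theory network; the clause encoding, step count, and cooling schedule are illustrative assumptions:

```python
import math
import random

# Clause encoding assumption: each clause is a list of signed literals,
# e.g. 3 means x3 must be True, -3 means x3 must be False (vars 1..n).

def satisfied(clause, assignment):
    return any(assignment[abs(lit)] == (lit > 0) for lit in clause)

def anneal_maxsat(clauses, n_vars, steps=20000, t0=2.0, cooling=0.9995, seed=0):
    rng = random.Random(seed)
    assignment = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
    def score(a):
        return sum(satisfied(c, a) for c in clauses)
    best, best_score = dict(assignment), score(assignment)
    cur_score, t = best_score, t0
    for _ in range(steps):
        v = rng.randint(1, n_vars)
        assignment[v] = not assignment[v]          # propose a single flip
        new_score = score(assignment)
        delta = new_score - cur_score
        if delta >= 0 or rng.random() < math.exp(delta / t):
            cur_score = new_score                  # accept the flip
            if cur_score > best_score:
                best_score, best = cur_score, dict(assignment)
        else:
            assignment[v] = not assignment[v]      # reject: undo the flip
        t *= cooling                               # cool the temperature
    return best, best_score

# Usage: (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
clauses = [[1, 2], [-1, 3], [-2, -3]]
a, s = anneal_maxsat(clauses, 3)
print(s)  # best number of satisfied clauses found
```

As in the paper's tester, the search reports the best assignment seen, so it answers the MAX-SAT question even when the formula is unsatisfiable.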
[15] Stan Z. Li. Markov Random Field Modeling in Image Analysis. Computer Science Workbench. Springer-Verlag, 2 edition, 2001.
[ bib | .html ]
[16] Anthony J. Bell. The co-information lattice. In Proceedings of the 4th International Symposium on Independent Component Analysis and Blind Source Separation (ICA 2003), Nara, Japan, 2003.
[ bib | .pdf ]
[17] Ilya Nemenman. Information theory, multivariate dependence, and genetic network inference, 2004.
[ bib | http ]
[18] Murray Shanahan. The frame problem. In Edward N. Zalta, editor, The Stanford Encyclopedia of Philosophy. 2004.
[ bib | http ]
[19] P. Domingos and M. Richardson. Markov Logic: a unifying framework for statistical relational learning. In T. Dietterich, L. Getoor, and K. Murphy, editors, Working Notes of the ICML-2004 Workshop on Statistical Relational Learning and Connections to Other Fields (SRL-2004), pages 48-55, Banff, Canada, July 2004.
[ bib ]
[20] Matthew Richardson and Pedro Domingos. Markov logic networks. Machine Learning, 62(1-2):107-136, 2006.
[ bib | .pdf ]
Abstract. We propose a simple approach to combining first-order logic and probabilistic graphical models in a single representation. A Markov logic network (MLN) is a first-order knowledge base with a weight attached to each formula (or clause). Together with a set of constants representing objects in the domain, it specifies a ground Markov network containing one feature for each possible grounding of a first-order formula in the KB, with the corresponding weight. Inference in MLNs is performed by MCMC over the minimal subset of the ground network required for answering the query. Weights are efficiently learned from relational databases by iteratively optimizing a pseudo-likelihood measure. Optionally, additional clauses are learned using inductive logic programming techniques. Experiments with a real-world database and knowledge base in a university domain illustrate the promise of this approach.
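The grounding step described in this abstract can be sketched in a few lines. The `Friends`/`Smokes` formula, the constants, and the weight below are illustrative assumptions, not taken from the paper's experiments:

```python
import itertools
import math

constants = ["Anna", "Bob"]
WEIGHT = 1.1  # weight attached to the single illustrative formula

def groundings():
    # One grounding of Friends(x,y) => (Smokes(x) <=> Smokes(y)) per pair (x, y)
    for x, y in itertools.product(constants, repeat=2):
        yield (("Friends", x, y), ("Smokes", x), ("Smokes", y))

def feature(world, g):
    """Indicator: 1 iff the ground formula is true in `world` (a dict atom -> bool)."""
    fr, sx, sy = g
    return int((not world[fr]) or (world[sx] == world[sy]))

def world_weight(world):
    """Unnormalized probability exp(sum_i w_i * n_i(world)) of the ground Markov network."""
    return math.exp(WEIGHT * sum(feature(world, g) for g in groundings()))

# A world where both constants smoke and everyone is friends satisfies all 4 groundings
world = {("Smokes", c): True for c in constants}
world.update({("Friends", x, y): True
              for x, y in itertools.product(constants, repeat=2)})
print(world_weight(world))  # exp(4 * 1.1)
```

With two constants the formula yields four ground features; normalizing `world_weight` over all worlds would give the MLN's distribution, which is what the paper's MCMC inference approximates on larger domains.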
[21] Reinhard Blutner. Neural networks, penalty logic and optimality theory. Technical Report PP-2005-01, ILLC, University of Amsterdam, Amsterdam, 2005.
[ bib | .pdf ]

This file has been generated by bibtex2html 1.75