Kohavi, R. 1995a. A study of cross-validation and bootstrap for accuracy estimation
and model selection. In Proceedings of the Fourteenth International Joint
Conference on Artificial Intelligence, Montreal, Canada. San Francisco: Morgan
Kaufmann, pp. 1137–1143.
———. 1995b. The power of decision tables. In N. Lavrac and S. Wrobel, editors,
Proceedings of the Eighth European Conference on Machine Learning,
Iráklion, Crete, Greece. Berlin: Springer-Verlag, pp. 174–189.
———. 1996. Scaling up the accuracy of Naïve Bayes classifiers: A decision tree
hybrid. In E. Simoudis, J. W. Han, and U. Fayyad, editors, Proceedings of the
Second International Conference on Knowledge Discovery and Data Mining,
Portland, OR. Menlo Park, CA: AAAI Press, pp. 202–207.
Kohavi, R., and G. H. John. 1997. Wrappers for feature subset selection. Artificial
Intelligence 97(1–2):273–324.
Kohavi, R., and C. Kunz. 1997. Option decision trees with majority votes. In D.
Fisher, editor, Proceedings of the Fourteenth International Conference on
Machine Learning, Nashville, TN. San Francisco: Morgan Kaufmann, pp.
161–169.
Kohavi, R., and F. Provost, editors. 1998. Machine learning: Special issue on appli-
cations of machine learning and the knowledge discovery process. Machine
Learning 30(2–3):127–274.
Kohavi, R., and M. Sahami. 1996. Error-based and entropy-based discretization
of continuous features. In E. Simoudis, J. W. Han, and U. Fayyad, editors,
Proceedings of the Second International Conference on Knowledge Discovery and
Data Mining, Portland, OR. Menlo Park, CA: AAAI Press, pp. 114–119.
Komarek, P., and A. Moore. 2000. A dynamic adaptation of AD trees for efficient
machine learning on large data sets. In P. Langley, editor, Proceedings of the
Seventeenth International Conference on Machine Learning, Stanford, CA. San
Francisco: Morgan Kaufmann, pp. 495–502.
Kononenko, I. 1995. On biases in estimating multivalued attributes. In Proceedings
of the Fourteenth International Joint Conference on Artificial Intelligence,
Montreal, Canada. San Francisco: Morgan Kaufmann, pp. 1034–1040.
Koppel, M., and J. Schler. 2004. Authorship verification as a one-class classification
problem. In R. Greiner and D. Schuurmans, editors, Proceedings of the Twenty-
First International Conference on Machine Learning, Banff, Alberta, Canada.
New York: ACM, pp. 489–495.
Kubat, M., R. C. Holte, and S. Matwin. 1998. Machine learning for the detection of
oil spills in satellite radar images. Machine Learning 30:195–215.
REFERENCES
Kushmerick, N., D. S. Weld, and R. Doorenbos. 1997. Wrapper induction for
information extraction. In Proceedings of the Fifteenth International Joint
Conference on Artificial Intelligence, Nagoya, Japan. San Francisco: Morgan
Kaufmann, pp. 729–735.
Landwehr, N., M. Hall, and E. Frank. 2003. Logistic model trees. In N. Lavrac, D.
Gamberger, L. Todorovski, and H. Blockeel, editors, Proceedings of the Four-
teenth European Conference on Machine Learning, Cavtat-Dubrovnik, Croatia.
Berlin: Springer-Verlag, pp. 241–252.
Langley, P. 1996. Elements of machine learning. San Francisco: Morgan Kaufmann.
Langley, P., and S. Sage. 1994. Induction of selective Bayesian classifiers. In R. L.
de Mantaras and D. Poole, editors, Proceedings of the Tenth Conference on
Uncertainty in Artificial Intelligence, Seattle, WA. San Francisco: Morgan
Kaufmann, pp. 399–406.
Langley, P., and H. A. Simon. 1995. Applications of machine learning and rule
induction. Communications of the ACM 38(11):55–64.
Langley, P., W. Iba, and K. Thompson. 1992. An analysis of Bayesian classifiers. In
W. Swartout, editor, Proceedings of the Tenth National Conference on Artificial
Intelligence, San Jose, CA. Menlo Park, CA: AAAI Press, pp. 223–228.
Lawson, C. L., and R. J. Hanson. 1995. Solving least-squares problems. Philadelphia:
SIAM Publications.
le Cessie, S., and J. C. van Houwelingen. 1992. Ridge estimators in logistic regres-
sion. Applied Statistics 41(1):191–201.
Li, M., and P. M. B. Vitanyi. 1992. Inductive reasoning and Kolmogorov complexity. Journal of Computer and System Sciences 44:343–384.
Lieberman, H., editor. 2001. Your wish is my command: Programming by example.
San Francisco: Morgan Kaufmann.
Littlestone, N. 1988. Learning quickly when irrelevant attributes abound: A new
linear-threshold algorithm. Machine Learning 2(4):285–318.
———. 1989. Mistake bounds and logarithmic linear-threshold learning algorithms.
PhD Dissertation, University of California, Santa Cruz, CA.
Liu, H., and R. Setiono. 1996. A probabilistic approach to feature selection: A filter
solution. In L. Saitta, editor, Proceedings of the Thirteenth International
Conference on Machine Learning, Bari, Italy. San Francisco: Morgan
Kaufmann, pp. 319–327.
———. 1997. Feature selection via discretization. IEEE Transactions on Knowledge
and Data Engineering 9(4):642–645.
Mann, T. 1993. Library research models: A guide to classification, cataloging, and com-
puters. New York: Oxford University Press.
Marill, T., and D. M. Green. 1963. On the effectiveness of receptors in recognition systems. IEEE Transactions on Information Theory 9(1):11–17.
Martin, B. 1995. Instance-based learning: Nearest neighbour with generalisation.
MSc Thesis, Department of Computer Science, University of Waikato, New
Zealand.
McCallum, A., and K. Nigam. 1998. A comparison of event models for Naïve
Bayes text classification. In Proceedings of the AAAI-98 Workshop on
Learning for Text Categorization, Madison, WI. Menlo Park, CA: AAAI
Press, pp. 41–48.
Mehta, M., R. Agrawal, and J. Rissanen. 1996. SLIQ: A fast scalable classifier for data mining. In P. Apers, M. Bouzeghoub, and G. Gardarin, editors, Proceedings of the Fifth International Conference on Extending Database Technology, Avignon, France. New York: Springer-Verlag.
Melville, P., and R. J. Mooney. 2005. Creating diversity in ensembles using artificial
data. Information Fusion 6(1):99–111.
Michalski, R. S., and R. L. Chilausky. 1980. Learning by being told and learning from
examples: An experimental comparison of the two methods of knowledge
acquisition in the context of developing an expert system for soybean disease
diagnosis. International Journal of Policy Analysis and Information Systems
4(2).
Michie, D. 1989. Problems of computer-aided concept formation. In J. R. Quinlan,
editor, Applications of expert systems, Vol. 2. Wokingham, England: Addison-
Wesley, pp. 310–333.
Minsky, M., and S. Papert. 1969. Perceptrons. Cambridge, MA: MIT Press.
Mitchell, T. M. 1997. Machine learning. New York: McGraw-Hill.
Mitchell, T. M., R. Caruana, D. Freitag, J. McDermott, and D. Zabowski. 1994.
Experience with a learning personal assistant. Communications of the ACM
37(7):81–91.
Moore, A. W. 1991. Efficient memory-based learning for robot control. PhD
Dissertation, Computer Laboratory, University of Cambridge, UK.
———. 2000. The anchors hierarchy: Using the triangle inequality to survive high-
dimensional data. In C. Boutilier and M. Goldszmidt, editors, Proceedings of
the Sixteenth Conference on Uncertainty in Artificial Intelligence, Stanford, CA.
San Francisco: Morgan Kaufmann, pp. 397–405.